Openrouter integration #14

Open
stevenrayhinojosa-gmail-com wants to merge 2 commits into vibing-ai:main from stevenrayhinojosa-gmail-com:openrouter-integration

Conversation


@stevenrayhinojosa-gmail-com stevenrayhinojosa-gmail-com commented May 22, 2025

OpenRouter Support Added to VeriFact
What’s This About?
I swapped out the direct OpenAI API key requirement in favor of OpenRouter. VeriFact can now use a bunch of different models without needing a standard OpenAI key, which makes things more flexible and potentially cheaper.

What Changed
New Stuff

openrouter_config.py: A config file that reroutes OpenAI calls to OpenRouter

verifact_manager_openrouter.py: A version of the VerifactManager that works directly with OpenRouter

.env.template: A clean template for setting up environment variables (no secrets included)

Updated Files

factcheck.py, verifact_manager.py, and all the agent files (claim_detector, evidence_hunter, verdict_writer) now use the OpenRouter setup

Behind the Scenes
I built a little adapter layer that pretends to be OpenAI but actually uses OpenRouter

The manager code was adjusted so it doesn’t rely on the OpenAI Agents SDK anymore

The API server now runs using the new setup
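A minimal sketch of what that adapter layer can look like (names here are illustrative, not the actual openrouter_config.py): the OpenAI v1 SDK reads OPENAI_BASE_URL and OPENAI_API_KEY from the environment when a client is constructed, so overriding those variables reroutes every client created afterwards to OpenRouter.

```python
import os

OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"


def configure_openrouter() -> dict:
    """Reroute OpenAI-SDK clients to OpenRouter by overriding the
    environment variables the SDK reads at client construction time.

    Returns a small summary of what was set, handy for startup logging.
    """
    key = os.environ.get("OPENROUTER_API_KEY", "")
    os.environ["OPENAI_BASE_URL"] = OPENROUTER_BASE_URL
    os.environ["OPENAI_API_KEY"] = key  # the OpenRouter key stands in for an OpenAI key
    return {"base_url": OPENROUTER_BASE_URL, "api_key_set": bool(key)}
```

Running this at import time from a shared config module (as the agent files do with the real one) is what lets the rest of the codebase pick up OpenRouter without further changes.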

How to Try It Out
Copy .env.template to .env and drop in your OPENROUTER_API_KEY

Run the server (e.g., uvicorn command as usual)

Send some claims to the /factcheck endpoint

Or, run the new manager script directly to see it work
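For step 3, here's a small stdlib-only sketch of calling the endpoint. The request field name `text` and the default port are assumptions about the API schema; check FactCheckRequest for the real field names.

```python
import json
from urllib import request


def build_factcheck_request(text: str, base_url: str = "http://localhost:8000"):
    """Assemble the URL, headers, and JSON body for a /factcheck call."""
    url = f"{base_url}/factcheck"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"text": text}).encode("utf-8")  # field name assumed
    return url, headers, body


def post_factcheck(text: str, base_url: str = "http://localhost:8000") -> dict:
    """Send the claim text and return the parsed FactCheckResponse JSON."""
    url, headers, body = build_factcheck_request(text, base_url)
    req = request.Request(url, data=body, headers=headers, method="POST")
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

In the project itself httpx would be the natural client (it's already a dependency); urllib is used here only to keep the sketch dependency-free.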

Tools Used
Python 3.10+

FastAPI for the API

Pydantic for validation

httpx for making requests

OpenRouter API

(Partially replacing) OpenAI Agents SDK

Why This is Cool
You’re not stuck with just OpenAI anymore

Easier to set up — only need one API key

Can use different models with no code changes

Should be faster and leaner overall

A Few Limitations
The OpenAI SDK still doesn’t play 100% nice with OpenRouter

A few advanced SDK features might not work right now

What’s Next
Add caching to speed things up

Use more features from OpenRouter

Clean up error handling and add more logging

Quick Security Notes
Secrets are in .env (not committed)

.env.template is safe for sharing

All requests go over HTTPS

Things I Checked
Server runs fine

Factcheck endpoint works

New manager runs okay

Handles errors

Loads .env without problems

No secrets in code

Related Issue
Resolves #XX: Moved to OpenRouter instead of OpenAI API

Summary by CodeRabbit

  • New Features

    • Added a new fact-checking manager using OpenRouter, enabling claim detection, evidence gathering, and verdict generation via a three-agent pipeline.
    • Introduced configuration options for OpenRouter integration and environment setup.
    • Added scripts and templates to automate test execution and environment configuration.
  • Bug Fixes

    • Improved error handling and fallback mechanisms in the fact-checking pipeline.
  • Documentation

    • Added comprehensive documentation for tests, fixtures, and usage instructions.
  • Tests

    • Introduced extensive unit, integration, end-to-end, and performance test suites with mock data factories and utilities.
  • Chores

    • Added configuration files for coverage reporting and package environment tracking.
    • Updated type annotations and imports for improved compatibility and clarity.


coderabbitai bot commented May 22, 2025

Walkthrough

This update introduces comprehensive testing infrastructure and configuration for the VeriFact fact-checking pipeline. It adds full test coverage with unit, integration, end-to-end, performance, and error recovery tests, along with detailed fixtures, utilities, and documentation. The fact-checking API endpoint is updated to use an asynchronous pipeline manager. OpenAI client configuration is adapted for OpenRouter usage, and a new OpenRouter-based pipeline manager is provided. Extensive environment and coverage configuration files are included, as well as a script to automate test execution and coverage enforcement.

Changes

File(s) Change Summary
.coveragerc, .env.template, =0.0.15, run_tests_with_coverage.sh Added configuration files for coverage measurement, environment variables, package status, and a test runner script with coverage enforcement.
src/api/factcheck.py Updated the factcheck endpoint to use an asynchronous VerifactManager pipeline, returning dynamic verdicts with error handling and resource cleanup.
src/utils/openrouter_config.py New module to configure the OpenAI client for OpenRouter API usage, setting environment variables and client parameters.
src/verifact_agents/claim_detector.py, src/verifact_agents/evidence_hunter.py, src/verifact_agents/verdict_writer.py Added import of OpenRouter configuration to ensure correct OpenAI client setup before agent logic executes.
src/verifact_manager.py Refactored type hints to use standard typing constructs, updated agent import paths, added OpenRouter configuration import, and improved method signatures for clarity and error handling.
src/verifact_manager_openrouter.py New module implementing a three-agent fact-checking pipeline using direct OpenRouter API calls, with async HTTP, error handling, and data models.
src/tests/README.md, src/tests/fixtures/README.md Added documentation for test directory structure, test types, fixture usage, and instructions for running and writing tests.
src/tests/conftest.py New pytest configuration file with environment loading, custom markers, logging setup, and fixtures for mocking API keys.
src/tests/fixtures/__init__.py, src/tests/integration/__init__.py, src/tests/utils/__init__.py Added __init__.py files with docstrings to establish Python packages for fixtures, integration tests, and utilities.
src/tests/fixtures/claims.py, src/tests/fixtures/evidence.py, src/tests/fixtures/verdicts.py Added categorized sample data for claims, evidence, and verdicts to support consistent and reusable testing across domains.
src/tests/integration/test_end_to_end.py, src/tests/integration/test_pipeline_integration.py Introduced end-to-end and integration tests for the full pipeline, using real and mocked agents to validate pipeline correctness and agent interactions.
src/tests/test_claim_edge_cases.py, src/tests/test_claim_types.py, src/tests/test_complex_claims.py, src/tests/test_data_flow.py, src/tests/test_error_recovery.py, src/tests/test_fixtures.py, src/tests/test_performance.py, src/tests/test_verifact_manager.py Added comprehensive unit, edge case, data flow, error recovery, performance, and manager tests using pytest and extensive mocking to validate all aspects of the pipeline.
src/tests/utils/mock_data_factory.py New factory module for generating randomized and scenario-based mock data for claims, evidence, and verdicts, supporting robust testing.
src/tests/utils/performance_utils.py New module for tracking, benchmarking, and analyzing pipeline performance, including timing, parallelism efficiency, and statistical reporting.

Sequence Diagram(s)

Fact-Checking Pipeline (API Endpoint to Verdicts)

sequenceDiagram
    participant Client
    participant API as API (factcheck)
    participant VerifactManager
    participant ClaimDetector
    participant EvidenceHunter
    participant VerdictWriter

    Client->>API: POST /factcheck (text)
    API->>VerifactManager: run(text)
    VerifactManager->>ClaimDetector: detect_claims(text)
    ClaimDetector-->>VerifactManager: [claims]
    loop For each claim
        VerifactManager->>EvidenceHunter: gather_evidence(claim)
        EvidenceHunter-->>VerifactManager: [evidence]
        VerifactManager->>VerdictWriter: generate_verdict(claim, evidence)
        VerdictWriter-->>VerifactManager: verdict
    end
    VerifactManager-->>API: [verdicts]
    API-->>Client: FactCheckResponse (claims, verdicts, metadata)

OpenRouter Pipeline Manager

sequenceDiagram
    participant User
    participant Manager as VerifactManager (OpenRouter)
    participant OpenRouter as OpenRouter API

    User->>Manager: run(query)
    Manager->>OpenRouter: chat_completion (ClaimDetector prompt)
    OpenRouter-->>Manager: detected claims
    loop For each claim
        Manager->>OpenRouter: chat_completion (EvidenceHunter prompt)
        OpenRouter-->>Manager: evidence
    end
    loop For each claim+evidence
        Manager->>OpenRouter: chat_completion (VerdictWriter prompt)
        OpenRouter-->>Manager: verdict
    end
    Manager-->>User: [verdicts]
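The per-claim calls in the diagram above come down to POSTs against OpenRouter's OpenAI-compatible chat-completions endpoint. A hedged sketch follows; the model slug and prompt wiring are illustrative, and the real verifact_manager_openrouter.py may differ.

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def chat_completion_payload(system_prompt: str, user_text: str,
                            model: str = "openai/gpt-4o-mini") -> dict:
    """Build the JSON body for a direct OpenRouter chat-completion call.

    The model slug is only an example; OpenRouter accepts any
    'provider/model' identifier it hosts.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    }


async def chat_completion(client, api_key: str, payload: dict) -> str:
    """POST the payload with an httpx.AsyncClient (assumed available)
    and return the first choice's message content."""
    resp = await client.post(
        OPENROUTER_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Each of the three agents would supply its own system prompt (claim detection, evidence gathering, verdict writing) while reusing the same transport.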

Poem

In the warren where code and logic meet,
New tests and fixtures hop in—oh, what a treat!
Pipelines checked, verdicts made clear,
With coverage and config, we leap without fear.
OpenRouter’s path now set by our paws,
Fact-checking with rigor—applause, applause!
🐇✨



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

🔭 Outside diff range comments (5)
=0.0.15 (1)

1-34: ⚠️ Potential issue

Remove user-specific pip freeze output
This file appears to be a local environment snapshot with absolute paths to a user’s home directory. It should not be committed to the repository. Consider deleting it or converting it to a sanitized requirements.txt (without system paths) or using a Pipfile/poetry.lock.

run_tests_with_coverage.sh (1)

1-42: 🛠️ Refactor suggestion

Make script executable and add proper error handling.

The shell script should be made executable, and it could benefit from better error handling throughout the script.

#!/bin/bash
+set -e  # Exit immediately if a command exits with a non-zero status
+
+# Display help message
+function show_help {
+    echo "Usage: $0 [OPTIONS]"
+    echo "Run tests with coverage for the VeriFact project"
+    echo ""
+    echo "Options:"
+    echo "  --help            Show this help message and exit"
+    echo "  --skip-install    Skip installing dependencies"
+    echo "  --unit            Run only unit tests"
+    echo "  --integration     Run only integration tests"
+    echo "  --e2e             Run only end-to-end tests"
+    echo ""
+}
+
+# Parse arguments
+SKIP_INSTALL=false
+TEST_TYPE="all"
+
+while [[ $# -gt 0 ]]; do
+    case "$1" in
+        --help)
+            show_help
+            exit 0
+            ;;
+        --skip-install)
+            SKIP_INSTALL=true
+            shift
+            ;;
+        --unit|--integration|--e2e)
+            TEST_TYPE="${1#--}"
+            shift
+            ;;
+        *)
+            echo "Unknown option: $1"
+            show_help
+            exit 1
+            ;;
+    esac
+done

# Activate virtual environment if it exists
if [ -d "venv" ]; then
    source venv/bin/activate
elif [ -d "env" ]; then
    source env/bin/activate
fi

# Find Python executable
if command -v python3 &> /dev/null; then
    PYTHON=python3
elif command -v python &> /dev/null; then
    PYTHON=python
else
    echo "Python not found. Please install Python 3."
    exit 1
fi

# Install required packages
+if [ "$SKIP_INSTALL" = false ]; then
+    echo "Installing dependencies..."
    $PYTHON -m pip install --user pytest pytest-cov pytest-asyncio python-dotenv pydantic openai openai-agents>=0.0.15
+fi

# Run tests with coverage
+case "$TEST_TYPE" in
+    unit)
+        echo "Running unit tests only..."
+        $PYTHON -m pytest src/tests/ -m unit --cov=src --cov-report=term --cov-report=html -v
+        ;;
+    integration)
+        echo "Running integration tests only..."
+        $PYTHON -m pytest src/tests/ -m integration --cov=src --cov-report=term --cov-report=html -v
+        ;;
+    e2e)
+        echo "Running end-to-end tests only..."
+        $PYTHON -m pytest src/tests/ -m e2e --cov=src --cov-report=term --cov-report=html -v
+        ;;
+    all|*)
+        echo "Running all tests..."
+        $PYTHON -m pytest src/tests/ --cov=src --cov-report=term --cov-report=html -v
+        ;;
+esac

# Print coverage report
echo "Coverage report generated in coverage_html_report/"
echo "Open coverage_html_report/index.html in a browser to view the report"

# Check if coverage is at least 80%
COVERAGE=$($PYTHON -m coverage report | grep TOTAL | awk '{print $4}' | sed 's/%//')
if [ -z "$COVERAGE" ]; then
    echo "Could not determine coverage percentage."
    exit 1
-elif (( $(echo "$COVERAGE < 80" | bc -l 2>/dev/null) )); then
-    echo "Coverage is below 80% (${COVERAGE}%)"
-    exit 1
-else
+# Use Python for the comparison as a more portable alternative to bc
+elif $PYTHON -c "exit(0 if float('${COVERAGE}') >= 80 else 1)"; then
     echo "Coverage is at or above 80% (${COVERAGE}%)"
     exit 0
+else
+    echo "Coverage is below 80% (${COVERAGE}%)"
+    exit 1
fi
src/tests/integration/test_end_to_end.py (1)

1-195: 🛠️ Refactor suggestion

Add test for error handling and recovery.

The current tests verify successful processing but don't test how the system handles API errors or timeouts. Consider adding a test that simulates API failures to verify the retry and fallback mechanisms.

You could add a test like this:

@pytest.mark.asyncio
async def test_error_recovery(monkeypatch, manager):
    """Test the pipeline's ability to recover from API errors."""
    # Mock the API call to fail initially but succeed on retry
    original_run = manager._run_agent
    call_count = [0]
    
    async def mock_run_agent(*args, **kwargs):
        call_count[0] += 1
        if call_count[0] == 1:  # First call fails
            raise Exception("Simulated API error")
        else:  # Subsequent calls succeed
            return await original_run(*args, **kwargs)
    
    monkeypatch.setattr(manager, "_run_agent", mock_run_agent)
    
    # Run the pipeline with a simple claim
    text = "The Earth is round."
    results = await manager.run(text)
    
    # Verify results
    assert len(results) > 0
    assert call_count[0] > 1  # Confirm that retry occurred
    assert all(isinstance(verdict, Verdict) for verdict in results)
src/tests/test_claim_edge_cases.py (1)

262-301: 🛠️ Refactor suggestion

Improve the test for multiple claims with failure.

The current test raises an exception in the manager's run method, but never verifies that partial results could be handled. Consider adding a test case where raise_exceptions=False in the manager config to test resilience.

@pytest.mark.asyncio
@patch("src.verifact_manager.Runner.run")
async def test_multiple_claims_one_fails(mock_run, manager):
    """Test the pipeline when one claim fails during evidence gathering."""
    # Sample claims
    claim1 = POLITICAL_CLAIMS[0]
    claim2 = POLITICAL_CLAIMS[1]
    
    # Sample evidence and verdict
    sample_evidence = POLITICAL_EVIDENCE["US military budget"]
    sample_verdict = POLITICAL_VERDICTS[0]
    
    # Configure mock to return different results for different agent calls
    call_count = 0
    def mock_runner_side_effect(*args, **kwargs):
        nonlocal call_count
        agent = args[0]
        
        if agent.__dict__.get('name') == 'ClaimDetector':
            return MockRunnerResult([claim1, claim2])
        elif agent.__dict__.get('name') == 'EvidenceHunter':
            # First evidence gathering succeeds, second fails
            if call_count == 0:
                call_count += 1
                return MockRunnerResult(sample_evidence)
            else:
                raise Exception("Evidence gathering error for second claim")
        elif agent.__dict__.get('name') == 'VerdictWriter':
            return MockRunnerResult(sample_verdict)
        return MockRunnerResult([])
    
    mock_run.side_effect = mock_runner_side_effect
    
    # Run the pipeline - should continue despite one claim failing
    with pytest.raises(Exception):
        await manager.run(SAMPLE_TEXTS[0])
    
    # Verify the mock was called multiple times
    assert mock_run.call_count >= 3

+@pytest.mark.asyncio
+@patch("src.verifact_manager.Runner.run")
+async def test_multiple_claims_one_fails_with_fallback(mock_run):
+    """Test the pipeline with fallback when one claim fails during evidence gathering."""
+    # Sample claims
+    claim1 = POLITICAL_CLAIMS[0]
+    claim2 = POLITICAL_CLAIMS[1]
+    
+    # Sample evidence and verdict
+    sample_evidence = POLITICAL_EVIDENCE["US military budget"]
+    sample_verdict = POLITICAL_VERDICTS[0]
+    
+    # Configure mock to return different results for different agent calls
+    call_count = 0
+    def mock_runner_side_effect(*args, **kwargs):
+        nonlocal call_count
+        agent = args[0]
+        
+        if agent.__dict__.get('name') == 'ClaimDetector':
+            return MockRunnerResult([claim1, claim2])
+        elif agent.__dict__.get('name') == 'EvidenceHunter':
+            # First evidence gathering succeeds, second fails
+            if call_count == 0:
+                call_count += 1
+                return MockRunnerResult(sample_evidence)
+            else:
+                raise Exception("Evidence gathering error for second claim")
+        elif agent.__dict__.get('name') == 'VerdictWriter':
+            return MockRunnerResult(sample_verdict)
+        return MockRunnerResult([])
+    
+    mock_run.side_effect = mock_runner_side_effect
+    
+    # Create manager with fallback enabled and exceptions disabled
+    config = ManagerConfig(
+        min_checkworthiness=0.5,
+        max_claims=5,
+        evidence_per_claim=3,
+        timeout_seconds=30.0,
+        enable_fallbacks=True,
+        retry_attempts=1,
+        raise_exceptions=False,
+        include_debug_info=False,
+    )
+    manager = VerifactManager(config)
+    
+    # Run the pipeline - should continue despite one claim failing
+    results = await manager.run(SAMPLE_TEXTS[0])
+    
+    # Verify we get at least one result back
+    assert len(results) == 1
+    assert results[0] == sample_verdict
+    
+    # Verify the mock was called multiple times
+    assert mock_run.call_count >= 3
🧰 Tools
🪛 Ruff (0.11.9)

269-269: Blank line contains whitespace

Remove whitespace from blank line

(W293)


273-273: Blank line contains whitespace

Remove whitespace from blank line

(W293)


279-279: Blank line contains whitespace

Remove whitespace from blank line

(W293)


292-292: Blank line contains whitespace

Remove whitespace from blank line

(W293)


294-294: Blank line contains whitespace

Remove whitespace from blank line

(W293)


296-296: Do not assert blind exception: Exception

(B017)


298-298: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/verifact_manager.py (1)

26-37: 🛠️ Refactor suggestion

max_claims & min_checkworthiness are never enforced

Several new tests rely on these knobs (test_claim_filtering, test_max_claims_limit), but the manager processes every claim it receives.
Add a filtering step right after _detect_claims:

+# After detecting claims
+claims = sorted(claims, key=lambda c: getattr(c, "context", 1.0), reverse=True)
+claims = [c for c in claims if getattr(c, "context", 1.0) >= self.config.min_checkworthiness]
+if self.config.max_claims:
+    claims = claims[: self.config.max_claims]
+
+if not claims:
+    logger.info("No claims met the worthiness threshold")
+    return []

Without this, the newly added tests will fail.

🧰 Tools
🪛 Ruff (0.11.9)

30-30: Use X | Y for type annotations

Convert to X | Y

(UP007)

🧹 Nitpick comments (54)
.env.template (4)

1-4: Header and file description.
The header sections and separators clearly organize the template. Consider adding a top-level note reminding users that this is a template and should not contain real secrets.


6-31: Model access and selection defaults.
Default model environment variables are well documented and placeholders are obvious. Ensure users replace OPENROUTER_API_KEY before running; you may want to call out fallback behavior or validation at startup if the key is missing.
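One way to surface a missing key at startup, as suggested here (a sketch only; the placeholder-prefix check assumes the template uses a recognizable placeholder value):

```python
import os


def require_openrouter_key() -> str:
    """Fail fast at startup if OPENROUTER_API_KEY is absent or still
    the template placeholder, instead of failing on the first API call."""
    key = os.environ.get("OPENROUTER_API_KEY", "").strip()
    if not key or key.startswith("your-"):  # placeholder convention assumed
        raise RuntimeError(
            "OPENROUTER_API_KEY is missing or still a placeholder; "
            "copy .env.template to .env and set a real key."
        )
    return key
```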


40-49: Application configuration.
API host/port and rate-limiting defaults look sensible. However, DEFAULT_API_KEY=verifact-default-key could be too permissive in production—consider forcing override or generating a unique default per deployment.


59-69: Advanced configuration.
Model caching and logging settings offer good flexibility. You may want to clarify the cache eviction policy when MODEL_CACHE_SIZE is reached and document the format options for LOG_FORMAT.

src/verifact_agents/verdict_writer.py (1)

1-3: Prefer explicit imports over wildcard
Wildcard imports (from src.utils.openrouter_config import *) introduce all names into the module namespace, making it harder to trace dependencies and increasing the risk of name collisions. Replace this with explicit imports of only the needed configuration symbols.

🧰 Tools
🪛 Ruff (0.11.9)

2-2: from src.utils.openrouter_config import * used; unable to detect undefined names

(F403)

src/verifact_agents/evidence_hunter.py (1)

1-3: Prefer explicit imports over wildcard
Using import * from src.utils.openrouter_config hides which names are actually used and pollutes the namespace. Import only the specific functions or variables required for configuration.

🧰 Tools
🪛 Ruff (0.11.9)

2-2: from src.utils.openrouter_config import * used; unable to detect undefined names

(F403)

src/api/factcheck.py (4)

4-4: Remove unused import.

The asyncio module is imported but not used anywhere in the code. This could lead to confusion about the dependencies of this module.

-import asyncio
🧰 Tools
🪛 Ruff (0.11.9)

4-4: asyncio imported but unused

Remove unused import: asyncio

(F401)


16-16: Add a docstring to the factcheck function.

This public API endpoint function is missing a docstring that explains its purpose, parameters, and return values.

@router.post("/factcheck", response_model=FactCheckResponse)
async def factcheck(request: FactCheckRequest):
+    """
+    Process a fact check request and return claims with verdicts.
+    
+    Args:
+        request (FactCheckRequest): The request containing text to fact check
+        
+    Returns:
+        FactCheckResponse: The fact checking results with claims and metadata
+    """
    start_time = time.time()
🧰 Tools
🪛 Ruff (0.11.9)

16-16: Missing docstring in public function

(D103)


29-35: Consider dynamic source metadata instead of hardcoded values.

The credibility and quote values are hardcoded for all sources, which might not accurately represent the varying reliability of different sources.

            for source_url in verdict.sources:
                sources_list.append(
                    Source(
                        url=source_url,
-                        credibility=0.9,  # Default credibility
-                        quote="Evidence from source"  # Default quote
+                        credibility=verdict.get_source_credibility(source_url, default=0.9),
+                        quote=verdict.get_source_quote(source_url, default="Evidence from source")
                    )
                )

Note: This assumes the verdict object has or could implement methods to retrieve source-specific metadata. If not, consider enhancing the Verdict class to include this information.
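A hypothetical shape for that enhanced Verdict (sketched with dataclasses for self-containment; the project itself uses Pydantic models, and these method names are the ones invented in the suggestion above, not existing API):

```python
from dataclasses import dataclass, field


@dataclass
class SourceMeta:
    """Per-source metadata a future Verdict could carry."""
    credibility: float = 0.9
    quote: str = "Evidence from source"


@dataclass
class Verdict:
    claim: str
    verdict: str
    sources: list = field(default_factory=list)
    source_meta: dict = field(default_factory=dict)  # url -> SourceMeta

    def get_source_credibility(self, url: str, default: float = 0.9) -> float:
        meta = self.source_meta.get(url)
        return meta.credibility if meta is not None else default

    def get_source_quote(self, url: str,
                         default: str = "Evidence from source") -> str:
        meta = self.source_meta.get(url)
        return meta.quote if meta is not None else default
```

The endpoint could then drop its hardcoded 0.9/"Evidence from source" values and fall back to them only when a source has no recorded metadata.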


76-76: Add newline at end of file.

Add a trailing newline at the end of the file to comply with PEP 8 guidelines and avoid potential issues with some tools.

    return response
+
🧰 Tools
🪛 Ruff (0.11.9)

76-76: No newline at end of file

Add trailing newline

(W292)

.coveragerc (1)

150-151: Consider adding title and timestamp to HTML reports.

For better documentation of your coverage reports, you might want to add a title and timestamp.

[html]
directory = coverage_html_report
+title = VeriFact Coverage Report
+show_contexts = true
src/tests/conftest.py (3)

3-3: Remove unused import.

The os module is imported but never used in the code.

-import os
🧰 Tools
🪛 Ruff (0.11.9)

3-3: os imported but unused

Remove unused import: os

(F401)


7-8: Consider checking for .env file existence before loading.

The current implementation assumes a .env file exists, but it might be better to check for its existence first to avoid potential warnings or errors.

# Load environment variables from .env file
-load_dotenv()
+# raise_error_if_not_found belongs to find_dotenv (not load_dotenv);
+# find_dotenv returns "" when no .env exists, making load_dotenv a no-op
+load_dotenv(find_dotenv(raise_error_if_not_found=False))

31-41: Consider adding fixtures for OpenRouter API key.

Since the PR objective mentions OpenRouter integration, consider adding a fixture for mocking the OpenRouter API key as well.

@pytest.fixture
def mock_openai_key(monkeypatch):
    """Mock the OPENAI_API_KEY environment variable."""
    monkeypatch.setenv("OPENAI_API_KEY", "mock-api-key-for-testing")


@pytest.fixture
def mock_search_key(monkeypatch):
    """Mock the SEARCH_API_KEY environment variable."""
    monkeypatch.setenv("SEARCH_API_KEY", "mock-search-api-key-for-testing")
+
+
+@pytest.fixture
+def mock_openrouter_key(monkeypatch):
+    """Mock the OPENROUTER_API_KEY environment variable."""
+    monkeypatch.setenv("OPENROUTER_API_KEY", "mock-openrouter-key-for-testing")
run_tests_with_coverage.sh (2)

21-21: Consider making dependency installation optional.

The script currently installs dependencies every time it runs, which could be slow. Consider adding a flag to skip installation when dependencies are already present.

-# Install required packages
-$PYTHON -m pip install --user pytest pytest-cov pytest-asyncio python-dotenv pydantic openai openai-agents>=0.0.15
+# Install required packages if needed
+if [ "$1" != "--skip-install" ]; then
+    echo "Installing dependencies..."
+    # Quote the version spec so the shell doesn't treat >= as a redirection
+    $PYTHON -m pip install --user pytest pytest-cov pytest-asyncio python-dotenv pydantic openai "openai-agents>=0.0.15"
+else
+    echo "Skipping dependency installation..."
+fi

24-24: Consider adding test selection parameters.

The script always runs all tests, but it might be useful to allow running specific test categories (unit, integration, e2e) using the pytest markers defined in conftest.py.

-# Run tests with coverage
-$PYTHON -m pytest src/tests/ --cov=src --cov-report=term --cov-report=html -v
+# Run tests with coverage
+TEST_TYPE=${1:-all}
+
+case "$TEST_TYPE" in
+    unit)
+        echo "Running unit tests only..."
+        $PYTHON -m pytest src/tests/ -m unit --cov=src --cov-report=term --cov-report=html -v
+        ;;
+    integration)
+        echo "Running integration tests only..."
+        $PYTHON -m pytest src/tests/ -m integration --cov=src --cov-report=term --cov-report=html -v
+        ;;
+    e2e)
+        echo "Running end-to-end tests only..."
+        $PYTHON -m pytest src/tests/ -m e2e --cov=src --cov-report=term --cov-report=html -v
+        ;;
+    all|*)
+        echo "Running all tests..."
+        $PYTHON -m pytest src/tests/ --cov=src --cov-report=term --cov-report=html -v
+        ;;
+esac
src/tests/fixtures/evidence.py (1)

10-10: Consider adding a negative relevance test case.

All current evidence has positive relevance scores (0.75-0.95). Consider adding at least one evidence item with a low relevance score (e.g., 0.3) to test how the system handles marginally relevant evidence.

src/tests/fixtures/claims.py (1)

110-120: Fix whitespace in blank lines.

There are blank lines containing whitespace characters, which should be removed to follow style guidelines.

-    
+
-    
+
-    
+
🧰 Tools
🪛 Ruff (0.11.9)

110-110: Blank line contains whitespace

Remove whitespace from blank line

(W293)


115-115: Blank line contains whitespace

Remove whitespace from blank line

(W293)


120-120: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/fixtures/verdicts.py (1)

47-61: Consider adding tests for confidence thresholds.

The verdict confidence scores are generally high (0.75-0.99). Consider adding at least one verdict with a low confidence score (e.g., 0.55) to test how the system handles less confident assertions.
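A sketch of such a fixture, again using a stand-in dataclass with the fields the integration tests assert on (claim, verdict, confidence, explanation, sources); the real Verdict model may differ:

```python
from dataclasses import dataclass

# Stand-in mirroring the fields asserted in the integration tests
@dataclass
class Verdict:
    claim: str
    verdict: str       # e.g. "true", "false", "partially true", "unverifiable"
    confidence: float  # 0.0-1.0
    explanation: str
    sources: list

# A deliberately low-confidence verdict for threshold testing
LOW_CONFIDENCE_VERDICT = Verdict(
    claim="Daily coffee consumption extends lifespan by five years.",
    verdict="partially true",
    confidence=0.55,
    explanation="Observational studies show weak, inconsistent associations.",
    sources=["https://example.com/coffee-meta-analysis"],
)
```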

src/tests/integration/test_end_to_end.py (3)

32-39: Consider increasing the timeout for end-to-end tests.

The current timeout of 60 seconds might be too short for complex queries, especially when using external APIs that might have variable response times.

config = ManagerConfig(
    min_checkworthiness=0.5,
    max_claims=2,  # Limit to 2 claims to reduce API costs
    evidence_per_claim=2,  # Limit to 2 evidence pieces per claim
-    timeout_seconds=60.0,
+    timeout_seconds=120.0,  # Increase timeout for more reliable e2e tests
    enable_fallbacks=True,
    retry_attempts=1,
    raise_exceptions=True,
    include_debug_info=False,
)

116-130: Add a more specific assertion for the no claims scenario.

The test for text with no claims should more clearly distinguish between "no claims detected" and "claims detected but unverifiable". Currently, the test accepts either empty results or unverifiable claims.

# Verify results - should be empty or have unverifiable claims
if results:
    assert all(isinstance(verdict, Verdict) for verdict in results)
    assert all(verdict.verdict == "unverifiable" for verdict in results)
+    # Explanations and sources should still be populated for unverifiable claims
+    for verdict in results:
+        assert len(verdict.explanation) > 0
+        assert len(verdict.sources) > 0
else:
+    # If no claims were detected, that's also acceptable
    assert results == []

166-172: Strengthen assertion for unverifiable claims.

For unverifiable claims, simply checking that the confidence is below 0.7 is not as specific as it could be. Consider asserting that the explanation contains relevant terms.

# The statement should be unverifiable or have low confidence
for verdict in results:
    assert verdict.verdict == "unverifiable" or verdict.confidence < 0.7
    assert len(verdict.explanation) > 0
    assert len(verdict.sources) > 0
+    # For unverifiable claims, check that the explanation mentions uncertainty
+    if verdict.verdict == "unverifiable":
+        assert any(term in verdict.explanation.lower() for term in ["unverifiable", "uncertain", "insufficient", "evidence", "proof"])
src/tests/test_claim_edge_cases.py (3)

4-4: Clean up unused imports.

The imports for AsyncMock and MagicMock are not used in this file.

-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import patch
🧰 Tools
🪛 Ruff (0.11.9)

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


16-25: Add a docstring to the __init__ method.

The MockRunnerResult class is missing a docstring for the __init__ method, which is flagged by the linter.

class MockRunnerResult:
    """Mock for the result returned by Runner.run()."""
    
    def __init__(self, output_data):
+        """Initialize with the given output data.
+
+        Args:
+            output_data: The data to be returned by final_output_as
+        """
        self.output_data = output_data
        self.final_output = str(output_data)
🧰 Tools
🪛 Ruff (0.11.9)

18-18: Blank line contains whitespace

Remove whitespace from blank line

(W293)


19-19: Missing docstring in __init__

(D107)


22-22: Blank line contains whitespace

Remove whitespace from blank line

(W293)


109-115: Use a more specific exception type.

The test asserts a blind Exception, which Ruff flags (B017). Consider using a more specific exception type, or at least constraining the assertion with a match pattern for better error handling and clarity.

# Run the pipeline and expect an exception
-with pytest.raises(Exception):
+with pytest.raises(Exception, match="Evidence gathering error"):
    await manager.run(SAMPLE_TEXTS[0])
🧰 Tools
🪛 Ruff (0.11.9)

110-110: Do not assert blind exception: Exception

(B017)


112-112: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_fixtures.py (3)

3-3: Clean up unused imports.

The pytest import isn't used directly in this file.

-import pytest
🧰 Tools
🪛 Ruff (0.11.9)

3-3: pytest imported but unused

Remove unused import: pytest

(F401)


53-75: Fix unused loop variables and whitespace issues.

The category variable is not used in the loop bodies, which could lead to confusion. Consider renaming to indicate it's unused.

# Check that all evidence items are instances of the Evidence class
-for category, evidence_list in ALL_EVIDENCE.items():
+for _category, evidence_list in ALL_EVIDENCE.items():
    for evidence_item in evidence_list:
        assert isinstance(evidence_item, Evidence)

# ...

# Check evidence attributes
-for category, evidence_list in ALL_EVIDENCE.items():
+for _category, evidence_list in ALL_EVIDENCE.items():
    for evidence in evidence_list:
        assert isinstance(evidence.content, str)
        assert isinstance(evidence.source, str)
        assert 0.0 <= evidence.relevance <= 1.0
        assert evidence.stance in ["supporting", "contradicting", "neutral"]
🧰 Tools
🪛 Ruff (0.11.9)

56-56: Loop control variable category not used within loop body

Rename unused category to _category

(B007)


59-59: Blank line contains whitespace

Remove whitespace from blank line

(W293)


62-62: Trailing whitespace

Remove trailing whitespace

(W291)


63-63: Trailing whitespace

Remove trailing whitespace

(W291)


64-64: Trailing whitespace

Remove trailing whitespace

(W291)


67-67: Blank line contains whitespace

Remove whitespace from blank line

(W293)


69-69: Loop control variable category not used within loop body

Rename unused category to _category

(B007)


102-128: Refactor to reduce cyclomatic complexity.

The test_fixture_relationships method has a high cyclomatic complexity (9, over the limit of 8), which makes it harder to maintain and understand. Consider simplifying by extracting the repeated verification logic.

def test_fixture_relationships():
    """Test the relationships between fixtures."""
+    def verify_fixture_relationship(claim_list, evidence_dict, verdict_list, test_claim_text):
+        """Helper to verify relationships between fixtures for a specific claim text."""
+        # The claim itself must exist in the fixture list
+        assert any(claim.text == test_claim_text for claim in claim_list), \
+            f"Claim not found in fixtures: {test_claim_text}"
+
+        # Check for evidence
+        assert any(test_claim_text.lower() in category.lower() for category in evidence_dict), \
+            f"No evidence found for key claim: {test_claim_text}"
+
+        # Check for verdict
+        assert any(verdict.claim == test_claim_text for verdict in verdict_list), \
+            f"No verdict found for key claim: {test_claim_text}"

    # Key claim to check
    key_claim_text = "The United States has the largest military budget in the world."
    
    # Check that there's evidence and verdict for the key claim
    for claim in POLITICAL_CLAIMS:
        if claim.text == key_claim_text:
+            verify_fixture_relationship(POLITICAL_CLAIMS, POLITICAL_EVIDENCE, POLITICAL_VERDICTS, key_claim_text)
-        # Find evidence that might match this claim
-        found_evidence = False
-        for category, evidence_list in POLITICAL_EVIDENCE.items():
-            if claim.text.lower() in category.lower():
-                found_evidence = True
-                break
-        
-        # Not all claims need evidence, but at least some should have it
-        if claim.text == "The United States has the largest military budget in the world.":
-            assert found_evidence, f"No evidence found for key claim: {claim.text}"
-    
-    # Check that there's a verdict for at least some claims
-    for claim in POLITICAL_CLAIMS:
-        found_verdict = False
-        for verdict in POLITICAL_VERDICTS:
-            if claim.text == verdict.claim:
-                found_verdict = True
-                break
-        
-        # Not all claims need verdicts, but at least some should have them
-        if claim.text == "The United States has the largest military budget in the world.":
-            assert found_verdict, f"No verdict found for key claim: {claim.text}"
🧰 Tools
🪛 Ruff (0.11.9)

108-108: Loop control variable evidence_list not used within loop body

Rename unused evidence_list to _evidence_list

(B007)


112-112: Blank line contains whitespace

Remove whitespace from blank line

(W293)


116-116: Blank line contains whitespace

Remove whitespace from blank line

(W293)


124-124: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🪛 GitHub Check: Codacy Static Code Analysis

[warning] 102-102: src/tests/test_fixtures.py#L102
Method test_fixture_relationships has a cyclomatic complexity of 9 (limit is 8)

src/tests/test_verifact_manager.py (4)

4-4: Clean up unused imports.

The imports for AsyncMock and MagicMock are not used directly in this file.

-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import patch
🧰 Tools
🪛 Ruff (0.11.9)

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


7-9: Remove unused class imports.

The imports for Claim, Evidence, and Verdict classes are not used in this file.

-from src.verifact_agents.claim_detector import Claim
-from src.verifact_agents.evidence_hunter import Evidence
-from src.verifact_agents.verdict_writer import Verdict
🧰 Tools
🪛 Ruff (0.11.9)

7-7: src.verifact_agents.claim_detector.Claim imported but unused

Remove unused import: src.verifact_agents.claim_detector.Claim

(F401)


8-8: src.verifact_agents.evidence_hunter.Evidence imported but unused

Remove unused import: src.verifact_agents.evidence_hunter.Evidence

(F401)


9-9: src.verifact_agents.verdict_writer.Verdict imported but unused

Remove unused import: src.verifact_agents.verdict_writer.Verdict

(F401)


16-26: Add a docstring to the __init__ method.

The MockRunnerResult class is missing a docstring for the __init__ method, which is flagged by the linter.

class MockRunnerResult:
    """Mock for the result returned by Runner.run()."""
    
    def __init__(self, output_data):
+        """Initialize with the given output data.
+
+        Args:
+            output_data: The data to be returned by final_output_as
+        """
        self.output_data = output_data
        self.final_output = str(output_data)
🧰 Tools
🪛 Ruff (0.11.9)

18-18: Blank line contains whitespace

Remove whitespace from blank line

(W293)


19-19: Missing docstring in __init__

(D107)


22-22: Blank line contains whitespace

Remove whitespace from blank line

(W293)


196-201: Use a more specific exception type.

The test uses a generic Exception type which is flagged by the linter. Consider using a more specific exception type or adding a match pattern for better error handling and clarity.

# Call the method and expect an exception
-with pytest.raises(Exception):
+with pytest.raises(Exception, match="Test error"):
    await manager.run(SAMPLE_TEXTS[0])
🧰 Tools
🪛 Ruff (0.11.9)

197-197: Do not assert blind exception: Exception

(B017)


199-199: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/integration/test_pipeline_integration.py (2)

28-50: Add docstrings to the mock agent class constructors.

Each of the mock agent classes is missing a docstring for the __init__ method.

class MockClaimDetector:
    """Mock claim detector for testing."""

    def __init__(self, claims_to_return):
+        """Initialize mock claim detector.
+
+        Args:
+            claims_to_return: Claims to be returned by detect_claims
+        """
        self.claims_to_return = claims_to_return
        self.detect_claims = AsyncMock(return_value=claims_to_return)

Similar docstrings should be added to MockEvidenceHunter.__init__ and MockVerdictWriter.__init__.

🧰 Tools
🪛 Ruff (0.11.9)

31-31: Missing docstring in __init__

(D107)


39-39: Missing docstring in __init__

(D107)


47-47: Missing docstring in __init__

(D107)


85-96: Enhance mock side effect for clearer agent identification.

The current implementation checks for agent name using __dict__.get('name'), which is fragile. Consider a more robust approach for identifying agents in the mock side effect.

# Configure mock to return different results for different agent calls
def mock_runner_side_effect(*args, **kwargs):
    # Check which agent is being called based on the agent object (first arg)
    agent = args[0]
-    if agent.__dict__.get('name') == 'ClaimDetector':
+    agent_name = getattr(agent, 'name', None)
+    if agent_name == 'ClaimDetector':
        return MockRunnerResult(sample_claims)
-    elif agent.__dict__.get('name') == 'EvidenceHunter':
+    elif agent_name == 'EvidenceHunter':
        return MockRunnerResult(sample_evidence)
-    elif agent.__dict__.get('name') == 'VerdictWriter':
+    elif agent_name == 'VerdictWriter':
        return MockRunnerResult(sample_verdict)
    return MockRunnerResult([])
src/tests/test_claim_types.py (3)

21-31: Missing __init__ docstring & small helper feature

MockRunnerResult lacks a docstring for its constructor and a convenient __repr__ useful while debugging failed assertions.

     def __init__(self, output_data):
-        self.output_data = output_data
-        self.final_output = str(output_data)
+        """Store mocked agent output."""
+        self.output_data = output_data
+        self.final_output = str(output_data)
+
+    def __repr__(self) -> str:  # helpful in pytest tracebacks
+        return f"MockRunnerResult({self.output_data!r})"
🧰 Tools
🪛 Ruff (0.11.9)

23-23: Blank line contains whitespace

Remove whitespace from blank line

(W293)


24-24: Missing docstring in __init__

(D107)


27-27: Blank line contains whitespace

Remove whitespace from blank line

(W293)


239-277: Side-effect parsing is brittle & drives cyclomatic complexity

mock_runner_side_effect leans on string slicing (split("Claim to investigate: ")) to discover which claim is processed.
This ties the test tightly to prompt text formatting and inflates complexity (Codacy > 8).
Consider mapping by index instead, or adding a lightweight helper:

def claim_from_prompt(prompt: str) -> str:
    return prompt.split("Claim to investigate: ")[1].split("\n", 1)[0]

then reuse everywhere. This reduces duplication and clarifies intent.

🧰 Tools
🪛 Ruff (0.11.9)

243-243: Blank line contains whitespace

Remove whitespace from blank line

(W293)


246-246: Blank line contains whitespace

Remove whitespace from blank line

(W293)


252-252: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🪛 GitHub Check: Codacy Static Code Analysis

[warning] 249-249: src/tests/test_claim_types.py#L249
Method test_mixed_claim_types.mock_runner_side_effect has a cyclomatic complexity of 10 (limit is 8)


1-297: Trailing-whitespace & blank-line nitpicks

Ruff flagged many W293 instances. Running ruff check --fix (or a formatter such as black) will trim these automatically—recommended to keep the diff small and CI green.
No functional impact, but it keeps the repo consistent.

🧰 Tools
🪛 Ruff (0.11.9)

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


9-9: src.verifact_agents.verdict_writer.Verdict imported but unused

Remove unused import: src.verifact_agents.verdict_writer.Verdict

(F401)


14-14: src.tests.fixtures.verdicts.POLITICAL_VERDICTS imported but unused

Remove unused import

(F401)


15-15: src.tests.fixtures.verdicts.HEALTH_VERDICTS imported but unused

Remove unused import

(F401)


16-16: src.tests.fixtures.verdicts.SCIENCE_VERDICTS imported but unused

Remove unused import

(F401)


23-23: Blank line contains whitespace

Remove whitespace from blank line

(W293)


24-24: Missing docstring in __init__

(D107)


27-27: Blank line contains whitespace

Remove whitespace from blank line

(W293)


62-62: Blank line contains whitespace

Remove whitespace from blank line

(W293)


71-71: Blank line contains whitespace

Remove whitespace from blank line

(W293)


75-75: Blank line contains whitespace

Remove whitespace from blank line

(W293)


96-96: Blank line contains whitespace

Remove whitespace from blank line

(W293)


107-107: Blank line contains whitespace

Remove whitespace from blank line

(W293)


118-118: Blank line contains whitespace

Remove whitespace from blank line

(W293)


120-120: Blank line contains whitespace

Remove whitespace from blank line

(W293)


123-123: Blank line contains whitespace

Remove whitespace from blank line

(W293)


139-139: Blank line contains whitespace

Remove whitespace from blank line

(W293)


150-150: Blank line contains whitespace

Remove whitespace from blank line

(W293)


152-152: Blank line contains whitespace

Remove whitespace from blank line

(W293)


155-155: Blank line contains whitespace

Remove whitespace from blank line

(W293)


171-171: Blank line contains whitespace

Remove whitespace from blank line

(W293)


182-182: Blank line contains whitespace

Remove whitespace from blank line

(W293)


184-184: Blank line contains whitespace

Remove whitespace from blank line

(W293)


187-187: Blank line contains whitespace

Remove whitespace from blank line

(W293)


203-203: Blank line contains whitespace

Remove whitespace from blank line

(W293)


214-214: Blank line contains whitespace

Remove whitespace from blank line

(W293)


216-216: Blank line contains whitespace

Remove whitespace from blank line

(W293)


219-219: Blank line contains whitespace

Remove whitespace from blank line

(W293)


237-237: Blank line contains whitespace

Remove whitespace from blank line

(W293)


243-243: Blank line contains whitespace

Remove whitespace from blank line

(W293)


246-246: Blank line contains whitespace

Remove whitespace from blank line

(W293)


252-252: Blank line contains whitespace

Remove whitespace from blank line

(W293)


278-278: Blank line contains whitespace

Remove whitespace from blank line

(W293)


280-280: Blank line contains whitespace

Remove whitespace from blank line

(W293)


283-283: Blank line contains whitespace

Remove whitespace from blank line

(W293)


286-286: Blank line contains whitespace

Remove whitespace from blank line

(W293)


293-293: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🪛 GitHub Check: Codacy Static Code Analysis

[warning] 249-249: src/tests/test_claim_types.py#L249
Method test_mixed_claim_types.mock_runner_side_effect has a cyclomatic complexity of 10 (limit is 8)

src/tests/test_performance.py (2)

3-13: Remove unused imports & keep only what’s required

AsyncMock, MagicMock, asyncio, Claim, Evidence, Verdict are currently unused after refactor.
Prune them to satisfy Ruff F401 and to clarify dependencies.

-from unittest.mock import AsyncMock, MagicMock, patch
-import asyncio
+from unittest.mock import patch
@@
-from src.verifact_agents.claim_detector import Claim
-from src.verifact_agents.evidence_hunter import Evidence
-from src.verifact_agents.verdict_writer import Verdict
🧰 Tools
🪛 Ruff (0.11.9)

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


5-5: asyncio imported but unused

Remove unused import: asyncio

(F401)


9-9: src.verifact_agents.claim_detector.Claim imported but unused

Remove unused import: src.verifact_agents.claim_detector.Claim

(F401)


10-10: src.verifact_agents.evidence_hunter.Evidence imported but unused

Remove unused import: src.verifact_agents.evidence_hunter.Evidence

(F401)


11-11: src.verifact_agents.verdict_writer.Verdict imported but unused

Remove unused import: src.verifact_agents.verdict_writer.Verdict

(F401)


148-152: High cyclomatic complexity in nested side-effect lambdas

Codacy flagged complexity > 8. Extract reusable helper functions (e.g., make_side_effect(claims, evidence_map, verdicts, delays)) to cut each test’s side-effect length dramatically and improve readability.

No immediate failure, but worth refactoring.
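One possible shape for such a helper, assuming (as in the other tests here) that the side effect receives the agent as its first argument and dispatches on its name attribute; the names and signature are illustrative:

```python
import asyncio

class MockRunnerResult:
    """Minimal stand-in for the result returned by Runner.run()."""

    def __init__(self, output_data):
        self.output_data = output_data
        self.final_output = str(output_data)

def make_side_effect(results_by_agent, delays=None):
    """Build an async side effect that returns canned results per agent name.

    results_by_agent: dict of agent name -> output data
    delays: optional dict of agent name -> simulated latency in seconds
    """
    delays = delays or {}

    async def side_effect(agent, *args, **kwargs):
        name = getattr(agent, "name", None)
        # Simulate per-agent latency for the performance assertions
        await asyncio.sleep(delays.get(name, 0))
        return MockRunnerResult(results_by_agent.get(name, []))

    return side_effect
```

Each test can then configure its mock in one line, e.g. make_side_effect({"ClaimDetector": claims}, delays={"EvidenceHunter": 0.5}).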

src/tests/test_complex_claims.py (4)

3-11: Tidy header imports

AsyncMock, MagicMock, and SAMPLE_TEXTS are unused.
Remove them to satisfy Ruff F401.

-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import patch
@@
-from src.tests.fixtures.claims import SAMPLE_TEXTS
🧰 Tools
🪛 Ruff (0.11.9)

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


11-11: src.tests.fixtures.claims.SAMPLE_TEXTS imported but unused

Remove unused import: src.tests.fixtures.claims.SAMPLE_TEXTS

(F401)


14-24: Docstring & __repr__ for debug-friendly MockRunnerResult

Same recommendation as in test_claim_types.py—adding a constructor docstring and __repr__ aids debugging when assertions fail.

🧰 Tools
🪛 Ruff (0.11.9)

16-16: Blank line contains whitespace

Remove whitespace from blank line

(W293)


17-17: Missing docstring in __init__

(D107)


20-20: Blank line contains whitespace

Remove whitespace from blank line

(W293)


46-51: compound_claim is never used

The variable defined at L46-51 isn’t referenced afterwards, producing Ruff F841.
Either drop the assignment or incorporate it into assertions.

-    compound_claim = Claim(
-        text="The Earth is round and orbits the Sun, which is a star that is 93 million miles away.",
-        context=0.9,
-    )
🧰 Tools
🪛 Ruff (0.11.9)

47-47: Local variable compound_claim is assigned to but never used

Remove assignment to unused variable compound_claim

(F841)


51-51: Blank line contains whitespace

Remove whitespace from blank line

(W293)


240-359: Oversized test function (86 lines) — improve maintainability

test_claim_with_mixed_verdicts significantly exceeds the 50-line guideline, complicating future changes.
Splitting into:

  • a helper that assembles mocks for mixed verdicts
  • smaller assertions blocks

would meet the guideline and enhance readability.

🧰 Tools
🪛 Ruff (0.11.9)

249-249: Blank line contains whitespace

Remove whitespace from blank line

(W293)


277-277: Blank line contains whitespace

Remove whitespace from blank line

(W293)


309-309: Blank line contains whitespace

Remove whitespace from blank line

(W293)


315-315: Blank line contains whitespace

Remove whitespace from blank line

(W293)


329-329: Blank line contains whitespace

Remove whitespace from blank line

(W293)


331-331: Blank line contains whitespace

Remove whitespace from blank line

(W293)


339-339: Blank line contains whitespace

Remove whitespace from blank line

(W293)


342-342: Blank line contains whitespace

Remove whitespace from blank line

(W293)


348-348: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🪛 GitHub Check: Codacy Static Code Analysis

[warning] 240-240: src/tests/test_complex_claims.py#L240
Method test_claim_with_mixed_verdicts has 86 lines of code (limit is 50)

src/verifact_manager.py (1)

105-118: Type-hint modernisation & clearer optionality

  1. Prefer built-in generics (list, dict) over typing.List / Dict.
  2. Use | instead of Optional[...] where appropriate.
-async def _gather_evidence(self, claims: List[Claim]) -> List[tuple[Claim, Optional[List[Evidence]]]]:
+async def _gather_evidence(
+    self,
+    claims: list[Claim],
+) -> list[tuple[Claim, list[Evidence] | None]]:

This quiets the Ruff UPxxx warnings and improves readability.

🧰 Tools
🪛 Ruff (0.11.9)

105-105: Use list instead of List for type annotation

Replace with list

(UP006)


105-105: Use list instead of List for type annotation

Replace with list

(UP006)


105-105: Use X | Y for type annotations

Convert to X | Y

(UP007)


105-105: Use list instead of List for type annotation

Replace with list

(UP006)


110-110: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

src/tests/test_error_recovery.py (1)

3-10: Prune unused imports to satisfy Ruff F401

AsyncMock, MagicMock, Claim, and Evidence are never referenced in this module.

-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import patch
-
-from src.verifact_agents.claim_detector import Claim
-from src.verifact_agents.evidence_hunter import Evidence
+# Claim / Evidence not needed here – they are generated via MockDataFactory
🧰 Tools
🪛 Ruff (0.11.9)

5-5: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


5-5: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


8-8: src.verifact_agents.claim_detector.Claim imported but unused

Remove unused import: src.verifact_agents.claim_detector.Claim

(F401)


9-9: src.verifact_agents.evidence_hunter.Evidence imported but unused

Remove unused import: src.verifact_agents.evidence_hunter.Evidence

(F401)

src/tests/test_data_flow.py (1)

3-5: Unused testing helpers – tidy up

AsyncMock, MagicMock, and call are imported but never used. Removing them avoids Ruff F401 warnings.

-from unittest.mock import AsyncMock, MagicMock, patch, call
+from unittest.mock import patch
🧰 Tools
🪛 Ruff (0.11.9)

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.call imported but unused

Remove unused import

(F401)

src/tests/utils/performance_utils.py (2)

3-6: Remove unused imports & update typing for 3.10+

asyncio and Tuple aren’t referenced, and on Python 3.10+ Callable / Awaitable should come from collections.abc (Ruff UP035) while built-in generics replace Dict / List.

-import time
-import asyncio
-from typing import Dict, List, Any, Callable, Awaitable, Optional, Tuple
+import time
+from typing import Any
+from collections.abc import Awaitable, Callable

Update downstream list / dict hints similarly to silence Ruff UP0xx warnings.

🧰 Tools
🪛 Ruff (0.11.9)

4-4: asyncio imported but unused

Remove unused import: asyncio

(F401)


5-5: Import from collections.abc instead: Callable, Awaitable

Import from collections.abc

(UP035)


5-5: typing.Dict is deprecated, use dict instead

(UP035)


5-5: typing.List is deprecated, use list instead

(UP035)


5-5: typing.Tuple is deprecated, use tuple instead

(UP035)


5-5: typing.Tuple imported but unused

Remove unused import: typing.Tuple

(F401)


145-147: Parallelism efficiency may exceed 1.0

sequential_time / total_duration_ms is effectively a speedup ratio, so it exceeds 1.0 whenever tasks genuinely overlap. If the metric is meant to be an efficiency in [0.0, 1.0], clamp it for clearer interpretation:

-parallelism_efficiency = sequential_time / total_duration_ms if total_duration_ms > 0 else 0
+eff = sequential_time / total_duration_ms if total_duration_ms > 0 else 0
+parallelism_efficiency = min(1.0, eff)
🧰 Tools
🪛 Ruff (0.11.9)

147-147: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/verifact_manager_openrouter.py (3)

212-218: Throttle parallel evidence gathering to avoid hitting rate limits

asyncio.gather fires one task per claim; a long article could generate
hundreds of concurrent requests, quickly exhausting model rate limits or your
quota. Consider bounding concurrency with a semaphore:

-        tasks = [self._gather_evidence_for_claim(claim) for claim in claims]
+        sem = asyncio.Semaphore(8)  # tune for your quota
+
+        async def _wrapped(claim):
+            async with sem:
+                return await self._gather_evidence_for_claim(claim)
+
+        tasks = [_wrapped(claim) for claim in claims]
🧰 Tools
🪛 Ruff (0.11.9)

212-212: Use list instead of List for type annotation

Replace with list

(UP006)


212-212: Use list instead of List for type annotation

Replace with list

(UP006)


212-212: Use X | Y for type annotations

Convert to X | Y

(UP007)


212-212: Use list instead of List for type annotation

Replace with list

(UP006)


218-218: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)


287-290: Remove pointless f-string

f"Skipping claim - no evidence found" contains no interpolations and triggers
Ruff rule F541.

-                logger.warning(f"Skipping claim - no evidence found")
+                logger.warning("Skipping claim - no evidence found")
🧰 Tools
🪛 Ruff (0.11.9)

289-289: f-string without any placeholders

Remove extraneous f prefix

(F541)


40-58: Address Ruff type-annotation deprecations and missing docstrings

Ruff flags multiple uses of typing.List/typing.Optional as well as missing
class/module docstrings. Migrating to PEP-585 (list, dict, tuple, |)
keeps the codebase modern and silences noise for contributors running Ruff.

No diff included because changes are mechanical and span the file.

🧰 Tools
🪛 Ruff (0.11.9)

58-58: Use list instead of List for type annotation

Replace with list

(UP006)

src/tests/utils/mock_data_factory.py (3)

3-6: Remove unused typing imports

Tuple and Union are imported but never used, which produces Ruff F401
warnings and slows static analysis.

-from typing import List, Dict, Any, Optional, Tuple, Union
+from typing import List, Dict, Any, Optional
🧰 Tools
🪛 Ruff (0.11.9)

4-4: typing.List is deprecated, use list instead

(UP035)


4-4: typing.Dict is deprecated, use dict instead

(UP035)


4-4: typing.Tuple is deprecated, use tuple instead

(UP035)


4-4: typing.Tuple imported but unused

Remove unused import

(F401)


4-4: typing.Union imported but unused

Remove unused import

(F401)


373-390: Deduplicate MockRunnerResult definitions across the test suite

MockRunnerResult already exists in several test modules; introducing a third
version increases maintenance risk. Export a single reusable class from this
factory (or a tests/mocks.py) and import it where needed.

-        class MockRunnerResult:
-            ...  # duplicated local definition
+        from src.tests.mocks import MockRunnerResult  # single shared test double
         return MockRunnerResult(output_data)
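One possible shape for that shared module, combining the docstring and __repr__ suggestions made above; the module path and the final_output_as hook are assumptions based on how the existing mocks are described:

```python
# src/tests/mocks.py -- suggested shared module (path is a suggestion)

class MockRunnerResult:
    """Mock for the result returned by Runner.run()."""

    def __init__(self, output_data):
        """Store mocked agent output."""
        self.output_data = output_data
        self.final_output = str(output_data)

    def __repr__(self) -> str:  # helpful in pytest tracebacks
        return f"MockRunnerResult({self.output_data!r})"

    def final_output_as(self, _output_type):
        """Return the stored data, mimicking the real result's coercion hook."""
        return self.output_data
```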
🧰 Tools
🪛 Ruff (0.11.9)

375-375: Blank line contains whitespace

Remove whitespace from blank line

(W293)


378-378: Blank line contains whitespace

Remove whitespace from blank line

(W293)


386-386: Blank line contains whitespace

Remove whitespace from blank line

(W293)


389-389: Blank line contains whitespace

Remove whitespace from blank line

(W293)


186-241: create_verdict exceeds 50 lines and mixes concerns

Besides construction, it contains business logic that maps verdict types to
confidence ranges and explanations. Extract those mappings to small helper
functions/constants to lower method length (52 > 50) and complexity (14 > 8).

This also enables re-use if other tests need consistent confidence logic.
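A sketch of that extraction, with illustrative ranges (mirror the values currently hard-coded in create_verdict):

```python
import random

# Hypothetical mapping of verdict type -> (min, max) confidence range;
# the actual ranges in create_verdict may differ.
CONFIDENCE_RANGES = {
    "true": (0.85, 0.99),
    "false": (0.80, 0.95),
    "partially true": (0.60, 0.80),
    "unverifiable": (0.40, 0.60),
}

def confidence_for(verdict_type: str, rng=random) -> float:
    """Pick a confidence value consistent with the verdict type.

    random is fine here: this generates test data, not cryptographic material,
    so the Codacy pseudo-random warnings can be suppressed at this one site.
    """
    low, high = CONFIDENCE_RANGES.get(verdict_type, (0.5, 0.9))
    return round(rng.uniform(low, high), 2)
```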

🧰 Tools
🪛 Ruff (0.11.9)

188-188: Use X | Y for type annotations

Convert to X | Y

(UP007)


189-189: Use X | Y for type annotations

Convert to X | Y

(UP007)


189-189: Use list instead of List for type annotation

Replace with list

(UP006)


190-190: Use X | Y for type annotations

Convert to X | Y

(UP007)


191-191: Use X | Y for type annotations

Convert to X | Y

(UP007)


192-192: Use X | Y for type annotations

Convert to X | Y

(UP007)


193-193: Use X | Y for type annotations

Convert to X | Y

(UP007)


193-193: Use list instead of List for type annotation

Replace with list

(UP006)


196-196: Blank line contains whitespace

Remove whitespace from blank line

(W293)


204-204: Blank line contains whitespace

Remove whitespace from blank line

(W293)


211-211: Blank line contains whitespace

Remove whitespace from blank line

(W293)


221-221: Blank line contains whitespace

Remove whitespace from blank line

(W293)


224-224: Blank line contains whitespace

Remove whitespace from blank line

(W293)


234-234: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🪛 GitHub Check: Codacy Static Code Analysis

[warning] 186-186: src/tests/utils/mock_data_factory.py#L186
Method create_verdict has 52 lines of code (limit is 50)


[warning] 186-186: src/tests/utils/mock_data_factory.py#L186
Method create_verdict has a cyclomatic complexity of 14 (limit is 8)


[warning] 210-210: src/tests/utils/mock_data_factory.py#L210
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 214-214: src/tests/utils/mock_data_factory.py#L214
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 218-218: src/tests/utils/mock_data_factory.py#L218
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 220-220: src/tests/utils/mock_data_factory.py#L220
Standard pseudo-random generators are not suitable for security/cryptographic purposes.
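These warnings are likely false positives for a mock-data factory, where reproducible `random` values are exactly what you want. The distinction, sketched below, matters only if a value is ever security-sensitive:

```python
import random
import secrets

# Fine for mock/test data: deterministic, fast, not security-sensitive.
rng = random.Random(42)
mock_confidence = rng.uniform(0.5, 1.0)

# Required when randomness is security-sensitive (API keys, session tokens):
api_token = secrets.token_hex(16)

print(0.5 <= mock_confidence <= 1.0)  # True
print(len(api_token))                 # 32
```

If the warning is noise here, a `# nosec` / Codacy suppression on those lines may be preferable to switching modules.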

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bacc1da and 3219a29.

📒 Files selected for processing (32)
  • .coveragerc (1 hunks)
  • .env.template (1 hunks)
  • =0.0.15 (1 hunks)
  • run_tests_with_coverage.sh (1 hunks)
  • src/api/factcheck.py (1 hunks)
  • src/tests/README.md (1 hunks)
  • src/tests/conftest.py (1 hunks)
  • src/tests/fixtures/README.md (1 hunks)
  • src/tests/fixtures/__init__.py (1 hunks)
  • src/tests/fixtures/claims.py (1 hunks)
  • src/tests/fixtures/evidence.py (1 hunks)
  • src/tests/fixtures/verdicts.py (1 hunks)
  • src/tests/integration/__init__.py (1 hunks)
  • src/tests/integration/test_end_to_end.py (1 hunks)
  • src/tests/integration/test_pipeline_integration.py (1 hunks)
  • src/tests/test_claim_edge_cases.py (1 hunks)
  • src/tests/test_claim_types.py (1 hunks)
  • src/tests/test_complex_claims.py (1 hunks)
  • src/tests/test_data_flow.py (1 hunks)
  • src/tests/test_error_recovery.py (1 hunks)
  • src/tests/test_fixtures.py (1 hunks)
  • src/tests/test_performance.py (1 hunks)
  • src/tests/test_verifact_manager.py (1 hunks)
  • src/tests/utils/__init__.py (1 hunks)
  • src/tests/utils/mock_data_factory.py (1 hunks)
  • src/tests/utils/performance_utils.py (1 hunks)
  • src/utils/openrouter_config.py (1 hunks)
  • src/verifact_agents/claim_detector.py (1 hunks)
  • src/verifact_agents/evidence_hunter.py (1 hunks)
  • src/verifact_agents/verdict_writer.py (1 hunks)
  • src/verifact_manager.py (8 hunks)
  • src/verifact_manager_openrouter.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
src/tests/test_claim_types.py (2)
src/verifact_manager.py (3)
  • VerifactManager (39-151)
  • ManagerConfig (26-36)
  • run (43-81)
src/tests/test_verifact_manager.py (3)
  • MockRunnerResult (16-25)
  • final_output_as (23-25)
  • manager (29-41)
src/tests/utils/mock_data_factory.py (5)
src/tests/integration/test_pipeline_integration.py (2)
  • MockRunnerResult (16-25)
  • final_output_as (23-25)
src/tests/test_claim_edge_cases.py (2)
  • MockRunnerResult (16-25)
  • final_output_as (23-25)
src/tests/test_claim_types.py (2)
  • MockRunnerResult (21-30)
  • final_output_as (28-30)
src/tests/test_complex_claims.py (2)
  • MockRunnerResult (14-23)
  • final_output_as (21-23)
src/tests/test_verifact_manager.py (2)
  • MockRunnerResult (16-25)
  • final_output_as (23-25)
src/verifact_manager.py (10)
src/verifact_agents/claim_detector.py (1)
  • Claim (8-12)
src/verifact_manager_openrouter.py (9)
  • Claim (40-43)
  • Evidence (45-50)
  • Verdict (52-58)
  • run (84-113)
  • _detect_claims (133-169)
  • _gather_evidence_for_claim (171-210)
  • _gather_evidence (212-228)
  • _generate_verdict_for_claim (230-281)
  • _generate_all_verdicts (283-296)
src/verifact_agents/evidence_hunter.py (1)
  • Evidence (9-15)
src/verifact_agents/verdict_writer.py (1)
  • Verdict (9-18)
src/tests/integration/test_pipeline_integration.py (1)
  • final_output_as (23-25)
src/tests/test_claim_edge_cases.py (1)
  • final_output_as (23-25)
src/tests/test_claim_types.py (1)
  • final_output_as (28-30)
src/tests/test_complex_claims.py (1)
  • final_output_as (21-23)
src/tests/test_verifact_manager.py (1)
  • final_output_as (23-25)
src/tests/utils/mock_data_factory.py (1)
  • final_output_as (387-388)
🪛 Ruff (0.11.9)
src/verifact_agents/evidence_hunter.py

2-2: from src.utils.openrouter_config import * used; unable to detect undefined names

(F403)

src/verifact_agents/verdict_writer.py

2-2: from src.utils.openrouter_config import * used; unable to detect undefined names

(F403)

src/verifact_agents/claim_detector.py

2-2: from src.utils.openrouter_config import * used; unable to detect undefined names

(F403)
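F403 fires because `import *` hides which names a module defines, so Ruff can no longer detect undefined names. The fix is to import explicitly (the exact names exported by `openrouter_config` are not shown here, so the pattern is demonstrated with a stdlib module instead):

```python
# Before (F403, defeats undefined-name detection):
# from os.path import *

# After: every name is traceable to its source module.
from os.path import join, splitext

print(splitext("openrouter_config.py")[1])  # .py
print(join("src", "utils"))                 # src/utils (on POSIX)
```

The agent files would similarly list the specific helpers they use, e.g. `from src.utils.openrouter_config import <name>` with the real export names.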

src/api/factcheck.py

4-4: asyncio imported but unused

Remove unused import: asyncio

(F401)


16-16: Missing docstring in public function

(D103)


76-76: No newline at end of file

Add trailing newline

(W292)

src/tests/conftest.py

3-3: os imported but unused

Remove unused import: os

(F401)

src/tests/fixtures/claims.py

110-110: Blank line contains whitespace

Remove whitespace from blank line

(W293)


115-115: Blank line contains whitespace

Remove whitespace from blank line

(W293)


120-120: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_verifact_manager.py

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


7-7: src.verifact_agents.claim_detector.Claim imported but unused

Remove unused import: src.verifact_agents.claim_detector.Claim

(F401)


8-8: src.verifact_agents.evidence_hunter.Evidence imported but unused

Remove unused import: src.verifact_agents.evidence_hunter.Evidence

(F401)


9-9: src.verifact_agents.verdict_writer.Verdict imported but unused

Remove unused import: src.verifact_agents.verdict_writer.Verdict

(F401)


18-18: Blank line contains whitespace

Remove whitespace from blank line

(W293)


19-19: Missing docstring in __init__

(D107)


22-22: Blank line contains whitespace

Remove whitespace from blank line

(W293)


51-51: Blank line contains whitespace

Remove whitespace from blank line

(W293)


54-54: Blank line contains whitespace

Remove whitespace from blank line

(W293)


70-70: Blank line contains whitespace

Remove whitespace from blank line

(W293)


73-73: Blank line contains whitespace

Remove whitespace from blank line

(W293)


89-89: Blank line contains whitespace

Remove whitespace from blank line

(W293)


92-92: Blank line contains whitespace

Remove whitespace from blank line

(W293)


111-111: Blank line contains whitespace

Remove whitespace from blank line

(W293)


114-114: Blank line contains whitespace

Remove whitespace from blank line

(W293)


131-131: Blank line contains whitespace

Remove whitespace from blank line

(W293)


137-137: Blank line contains whitespace

Remove whitespace from blank line

(W293)


139-139: Blank line contains whitespace

Remove whitespace from blank line

(W293)


142-142: Blank line contains whitespace

Remove whitespace from blank line

(W293)


159-159: Blank line contains whitespace

Remove whitespace from blank line

(W293)


163-163: Blank line contains whitespace

Remove whitespace from blank line

(W293)


166-166: Blank line contains whitespace

Remove whitespace from blank line

(W293)


180-180: Blank line contains whitespace

Remove whitespace from blank line

(W293)


183-183: Blank line contains whitespace

Remove whitespace from blank line

(W293)


195-195: Blank line contains whitespace

Remove whitespace from blank line

(W293)


197-197: Do not assert blind exception: Exception

(B017)
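B017 flags `pytest.raises(Exception)` because a blanket catch passes even when an unrelated bug raises. A hedged sketch of the fix, with a hypothetical `detect_claims` standing in for the real code path and `ValueError` assumed as the expected error type:

```python
import pytest


def detect_claims(text: str) -> list[str]:
    """Hypothetical stand-in for the claim detector under test."""
    if not text.strip():
        raise ValueError("empty input")
    return [text]


# Too broad (flagged by B017):
# with pytest.raises(Exception):
#     detect_claims("")

# Specific: pin the exception type (and optionally the message).
with pytest.raises(ValueError, match="empty input"):
    detect_claims("")

print("raised ValueError as expected")
```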


199-199: Blank line contains whitespace

Remove whitespace from blank line

(W293)


213-213: Blank line contains whitespace

Remove whitespace from blank line

(W293)


215-215: Do not assert blind exception: Exception

(B017)


217-217: Blank line contains whitespace

Remove whitespace from blank line

(W293)


232-232: Blank line contains whitespace

Remove whitespace from blank line

(W293)


236-236: Blank line contains whitespace

Remove whitespace from blank line

(W293)


238-238: Do not assert blind exception: Exception

(B017)


240-240: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/integration/test_pipeline_integration.py

10-10: src.verifact_agents.claim_detector.Claim imported but unused

Remove unused import: src.verifact_agents.claim_detector.Claim

(F401)


11-11: src.verifact_agents.evidence_hunter.Evidence imported but unused

Remove unused import: src.verifact_agents.evidence_hunter.Evidence

(F401)


13-13: src.verifact_manager.ManagerConfig imported but unused

Remove unused import: src.verifact_manager.ManagerConfig

(F401)


19-19: Missing docstring in __init__

(D107)


31-31: Missing docstring in __init__

(D107)


39-39: Missing docstring in __init__

(D107)


47-47: Missing docstring in __init__

(D107)

src/tests/test_claim_types.py

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


9-9: src.verifact_agents.verdict_writer.Verdict imported but unused

Remove unused import: src.verifact_agents.verdict_writer.Verdict

(F401)


14-14: src.tests.fixtures.verdicts.POLITICAL_VERDICTS imported but unused

Remove unused import

(F401)


15-15: src.tests.fixtures.verdicts.HEALTH_VERDICTS imported but unused

Remove unused import

(F401)


16-16: src.tests.fixtures.verdicts.SCIENCE_VERDICTS imported but unused

Remove unused import

(F401)


23-23: Blank line contains whitespace

Remove whitespace from blank line

(W293)


24-24: Missing docstring in __init__

(D107)


27-27: Blank line contains whitespace

Remove whitespace from blank line

(W293)


62-62: Blank line contains whitespace

Remove whitespace from blank line

(W293)


71-71: Blank line contains whitespace

Remove whitespace from blank line

(W293)


75-75: Blank line contains whitespace

Remove whitespace from blank line

(W293)


96-96: Blank line contains whitespace

Remove whitespace from blank line

(W293)


107-107: Blank line contains whitespace

Remove whitespace from blank line

(W293)


118-118: Blank line contains whitespace

Remove whitespace from blank line

(W293)


120-120: Blank line contains whitespace

Remove whitespace from blank line

(W293)


123-123: Blank line contains whitespace

Remove whitespace from blank line

(W293)


139-139: Blank line contains whitespace

Remove whitespace from blank line

(W293)


150-150: Blank line contains whitespace

Remove whitespace from blank line

(W293)


152-152: Blank line contains whitespace

Remove whitespace from blank line

(W293)


155-155: Blank line contains whitespace

Remove whitespace from blank line

(W293)


171-171: Blank line contains whitespace

Remove whitespace from blank line

(W293)


182-182: Blank line contains whitespace

Remove whitespace from blank line

(W293)


184-184: Blank line contains whitespace

Remove whitespace from blank line

(W293)


187-187: Blank line contains whitespace

Remove whitespace from blank line

(W293)


203-203: Blank line contains whitespace

Remove whitespace from blank line

(W293)


214-214: Blank line contains whitespace

Remove whitespace from blank line

(W293)


216-216: Blank line contains whitespace

Remove whitespace from blank line

(W293)


219-219: Blank line contains whitespace

Remove whitespace from blank line

(W293)


237-237: Blank line contains whitespace

Remove whitespace from blank line

(W293)


243-243: Blank line contains whitespace

Remove whitespace from blank line

(W293)


246-246: Blank line contains whitespace

Remove whitespace from blank line

(W293)


252-252: Blank line contains whitespace

Remove whitespace from blank line

(W293)


278-278: Blank line contains whitespace

Remove whitespace from blank line

(W293)


280-280: Blank line contains whitespace

Remove whitespace from blank line

(W293)


283-283: Blank line contains whitespace

Remove whitespace from blank line

(W293)


286-286: Blank line contains whitespace

Remove whitespace from blank line

(W293)


293-293: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_claim_edge_cases.py

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


18-18: Blank line contains whitespace

Remove whitespace from blank line

(W293)


19-19: Missing docstring in __init__

(D107)


22-22: Blank line contains whitespace

Remove whitespace from blank line

(W293)


50-50: Blank line contains whitespace

Remove whitespace from blank line

(W293)


53-53: Blank line contains whitespace

Remove whitespace from blank line

(W293)


65-65: Blank line contains whitespace

Remove whitespace from blank line

(W293)


71-71: Blank line contains whitespace

Remove whitespace from blank line

(W293)


78-78: Blank line contains whitespace

Remove whitespace from blank line

(W293)


80-80: Blank line contains whitespace

Remove whitespace from blank line

(W293)


83-83: Blank line contains whitespace

Remove whitespace from blank line

(W293)


95-95: Blank line contains whitespace

Remove whitespace from blank line

(W293)


99-99: Blank line contains whitespace

Remove whitespace from blank line

(W293)


106-106: Blank line contains whitespace

Remove whitespace from blank line

(W293)


108-108: Blank line contains whitespace

Remove whitespace from blank line

(W293)


110-110: Do not assert blind exception: Exception

(B017)


112-112: Blank line contains whitespace

Remove whitespace from blank line

(W293)


124-124: Blank line contains whitespace

Remove whitespace from blank line

(W293)


133-133: Blank line contains whitespace

Remove whitespace from blank line

(W293)


137-137: Blank line contains whitespace

Remove whitespace from blank line

(W293)


145-145: Blank line contains whitespace

Remove whitespace from blank line

(W293)


147-147: Blank line contains whitespace

Remove whitespace from blank line

(W293)


150-150: Blank line contains whitespace

Remove whitespace from blank line

(W293)


164-164: Blank line contains whitespace

Remove whitespace from blank line

(W293)


180-180: Blank line contains whitespace

Remove whitespace from blank line

(W293)


189-189: Blank line contains whitespace

Remove whitespace from blank line

(W293)


193-193: Blank line contains whitespace

Remove whitespace from blank line

(W293)


201-201: Blank line contains whitespace

Remove whitespace from blank line

(W293)


203-203: Blank line contains whitespace

Remove whitespace from blank line

(W293)


206-206: Blank line contains whitespace

Remove whitespace from blank line

(W293)


220-220: Blank line contains whitespace

Remove whitespace from blank line

(W293)


230-230: Blank line contains whitespace

Remove whitespace from blank line

(W293)


238-238: Blank line contains whitespace

Remove whitespace from blank line

(W293)


242-242: Blank line contains whitespace

Remove whitespace from blank line

(W293)


250-250: Blank line contains whitespace

Remove whitespace from blank line

(W293)


252-252: Blank line contains whitespace

Remove whitespace from blank line

(W293)


255-255: Blank line contains whitespace

Remove whitespace from blank line

(W293)


269-269: Blank line contains whitespace

Remove whitespace from blank line

(W293)


273-273: Blank line contains whitespace

Remove whitespace from blank line

(W293)


279-279: Blank line contains whitespace

Remove whitespace from blank line

(W293)


292-292: Blank line contains whitespace

Remove whitespace from blank line

(W293)


294-294: Blank line contains whitespace

Remove whitespace from blank line

(W293)


296-296: Do not assert blind exception: Exception

(B017)


298-298: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_complex_claims.py

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


11-11: src.tests.fixtures.claims.SAMPLE_TEXTS imported but unused

Remove unused import: src.tests.fixtures.claims.SAMPLE_TEXTS

(F401)


16-16: Blank line contains whitespace

Remove whitespace from blank line

(W293)


17-17: Missing docstring in __init__

(D107)


20-20: Blank line contains whitespace

Remove whitespace from blank line

(W293)


47-47: Local variable compound_claim is assigned to but never used

Remove assignment to unused variable compound_claim

(F841)


51-51: Blank line contains whitespace

Remove whitespace from blank line

(W293)


59-59: Blank line contains whitespace

Remove whitespace from blank line

(W293)


87-87: Blank line contains whitespace

Remove whitespace from blank line

(W293)


119-119: Blank line contains whitespace

Remove whitespace from blank line

(W293)


125-125: Blank line contains whitespace

Remove whitespace from blank line

(W293)


142-142: Blank line contains whitespace

Remove whitespace from blank line

(W293)


144-144: Blank line contains whitespace

Remove whitespace from blank line

(W293)


147-147: Blank line contains whitespace

Remove whitespace from blank line

(W293)


152-152: Blank line contains whitespace

Remove whitespace from blank line

(W293)


168-168: Blank line contains whitespace

Remove whitespace from blank line

(W293)


184-184: Blank line contains whitespace

Remove whitespace from blank line

(W293)


202-202: Blank line contains whitespace

Remove whitespace from blank line

(W293)


208-208: Blank line contains whitespace

Remove whitespace from blank line

(W293)


222-222: Blank line contains whitespace

Remove whitespace from blank line

(W293)


224-224: Blank line contains whitespace

Remove whitespace from blank line

(W293)


227-227: Blank line contains whitespace

Remove whitespace from blank line

(W293)


232-232: Blank line contains whitespace

Remove whitespace from blank line

(W293)


249-249: Blank line contains whitespace

Remove whitespace from blank line

(W293)


277-277: Blank line contains whitespace

Remove whitespace from blank line

(W293)


309-309: Blank line contains whitespace

Remove whitespace from blank line

(W293)


315-315: Blank line contains whitespace

Remove whitespace from blank line

(W293)


329-329: Blank line contains whitespace

Remove whitespace from blank line

(W293)


331-331: Blank line contains whitespace

Remove whitespace from blank line

(W293)


339-339: Blank line contains whitespace

Remove whitespace from blank line

(W293)


342-342: Blank line contains whitespace

Remove whitespace from blank line

(W293)


348-348: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_performance.py

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


5-5: asyncio imported but unused

Remove unused import: asyncio

(F401)


9-9: src.verifact_agents.claim_detector.Claim imported but unused

Remove unused import: src.verifact_agents.claim_detector.Claim

(F401)


10-10: src.verifact_agents.evidence_hunter.Evidence imported but unused

Remove unused import: src.verifact_agents.evidence_hunter.Evidence

(F401)


11-11: src.verifact_agents.verdict_writer.Verdict imported but unused

Remove unused import: src.verifact_agents.verdict_writer.Verdict

(F401)

src/tests/test_data_flow.py

4-4: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


4-4: unittest.mock.call imported but unused

Remove unused import

(F401)


32-32: Blank line contains whitespace

Remove whitespace from blank line

(W293)


33-33: Missing docstring in __init__

(D107)


40-40: Blank line contains whitespace

Remove whitespace from blank line

(W293)


61-61: Blank line contains whitespace

Remove whitespace from blank line

(W293)


66-66: Blank line contains whitespace

Remove whitespace from blank line

(W293)


73-73: Blank line contains whitespace

Remove whitespace from blank line

(W293)


77-77: Blank line contains whitespace

Remove whitespace from blank line

(W293)


82-82: Blank line contains whitespace

Remove whitespace from blank line

(W293)


86-86: Blank line contains whitespace

Remove whitespace from blank line

(W293)


90-90: Blank line contains whitespace

Remove whitespace from blank line

(W293)


92-92: Blank line contains whitespace

Remove whitespace from blank line

(W293)


96-96: Blank line contains whitespace

Remove whitespace from blank line

(W293)


99-99: Blank line contains whitespace

Remove whitespace from blank line

(W293)


104-104: Blank line contains whitespace

Remove whitespace from blank line

(W293)


109-109: Blank line contains whitespace

Remove whitespace from blank line

(W293)


115-115: Blank line contains whitespace

Remove whitespace from blank line

(W293)


117-117: Loop control variable i not used within loop body

(B007)
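B007 is a one-character fix: when the loop only needs a repeat count, rename the unused index to `_` (or prefix it, e.g. `_i`) to make the intent explicit. A minimal sketch:

```python
results = []

# Flagged form:
# for i in range(3):
#     results.append("claim")

# Clean form: "_" signals the index is intentionally unused.
for _ in range(3):
    results.append("claim")

print(results)  # ['claim', 'claim', 'claim']
```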


132-132: Blank line contains whitespace

Remove whitespace from blank line

(W293)


137-137: Blank line contains whitespace

Remove whitespace from blank line

(W293)


154-154: Blank line contains whitespace

Remove whitespace from blank line

(W293)


160-160: Blank line contains whitespace

Remove whitespace from blank line

(W293)


177-177: Blank line contains whitespace

Remove whitespace from blank line

(W293)


179-179: Blank line contains whitespace

Remove whitespace from blank line

(W293)


182-182: Blank line contains whitespace

Remove whitespace from blank line

(W293)


197-197: Blank line contains whitespace

Remove whitespace from blank line

(W293)


204-204: Blank line contains whitespace

Remove whitespace from blank line

(W293)


212-212: Blank line contains whitespace

Remove whitespace from blank line

(W293)


218-218: Blank line contains whitespace

Remove whitespace from blank line

(W293)


227-227: Blank line contains whitespace

Remove whitespace from blank line

(W293)


229-229: Blank line contains whitespace

Remove whitespace from blank line

(W293)


232-232: Blank line contains whitespace

Remove whitespace from blank line

(W293)


237-237: Blank line contains whitespace

Remove whitespace from blank line

(W293)


240-240: Blank line contains whitespace

Remove whitespace from blank line

(W293)


246-246: Blank line contains whitespace

Remove whitespace from blank line

(W293)


261-261: Blank line contains whitespace

Remove whitespace from blank line

(W293)


293-293: Blank line contains whitespace

Remove whitespace from blank line

(W293)


299-299: Blank line contains whitespace

Remove whitespace from blank line

(W293)


305-305: Function definition does not bind loop variable test_verdict

(B023)
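B023 is the classic late-binding pitfall: a function defined in a loop closes over the loop *variable*, not its value at definition time, so every callback ends up seeing the last `test_verdict`. A self-contained illustration with the standard default-argument fix (variable names here are illustrative):

```python
verdicts = ["true", "false", "unverifiable"]

# Buggy: all three lambdas share the same "v", which ends as "unverifiable".
late = [lambda: v for v in verdicts]

# Fixed: a default argument binds the current value at definition time.
bound = [lambda v=v: v for v in verdicts]

print([f() for f in late])   # ['unverifiable', 'unverifiable', 'unverifiable']
print([f() for f in bound])  # ['true', 'false', 'unverifiable']
```

The same fix applies to a `def` inside the loop: pass `test_verdict` as a defaulted parameter instead of capturing it implicitly.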


307-307: Blank line contains whitespace

Remove whitespace from blank line

(W293)


309-309: Blank line contains whitespace

Remove whitespace from blank line

(W293)


312-312: Blank line contains whitespace

Remove whitespace from blank line

(W293)


331-331: Blank line contains whitespace

Remove whitespace from blank line

(W293)


336-336: Blank line contains whitespace

Remove whitespace from blank line

(W293)


345-345: Blank line contains whitespace

Remove whitespace from blank line

(W293)


359-359: Blank line contains whitespace

Remove whitespace from blank line

(W293)


371-371: Blank line contains whitespace

Remove whitespace from blank line

(W293)


373-373: Blank line contains whitespace

Remove whitespace from blank line

(W293)


376-376: Blank line contains whitespace

Remove whitespace from blank line

(W293)


380-380: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_fixtures.py

3-3: pytest imported but unused

Remove unused import: pytest

(F401)


38-38: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: Trailing whitespace

Remove trailing whitespace

(W291)


42-42: Trailing whitespace

Remove trailing whitespace

(W291)


43-43: Trailing whitespace

Remove trailing whitespace

(W291)


46-46: Blank line contains whitespace

Remove whitespace from blank line

(W293)


56-56: Loop control variable category not used within loop body

Rename unused category to _category

(B007)


59-59: Blank line contains whitespace

Remove whitespace from blank line

(W293)


62-62: Trailing whitespace

Remove trailing whitespace

(W291)


63-63: Trailing whitespace

Remove trailing whitespace

(W291)


64-64: Trailing whitespace

Remove trailing whitespace

(W291)


67-67: Blank line contains whitespace

Remove whitespace from blank line

(W293)


69-69: Loop control variable category not used within loop body

Rename unused category to _category

(B007)


82-82: Blank line contains whitespace

Remove whitespace from blank line

(W293)


85-85: Trailing whitespace

Remove trailing whitespace

(W291)


86-86: Trailing whitespace

Remove trailing whitespace

(W291)


87-87: Trailing whitespace

Remove trailing whitespace

(W291)


90-90: Blank line contains whitespace

Remove whitespace from blank line

(W293)


108-108: Loop control variable evidence_list not used within loop body

Rename unused evidence_list to _evidence_list

(B007)


112-112: Blank line contains whitespace

Remove whitespace from blank line

(W293)


116-116: Blank line contains whitespace

Remove whitespace from blank line

(W293)


124-124: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/utils/mock_data_factory.py

4-4: typing.List is deprecated, use list instead

(UP035)


4-4: typing.Dict is deprecated, use dict instead

(UP035)


4-4: typing.Tuple is deprecated, use tuple instead

(UP035)


4-4: typing.Tuple imported but unused

Remove unused import

(F401)


4-4: typing.Union imported but unused

Remove unused import

(F401)


52-52: Use X | Y for type annotations

Convert to X | Y

(UP007)


53-53: Use X | Y for type annotations

Convert to X | Y

(UP007)


54-54: Use X | Y for type annotations

Convert to X | Y

(UP007)


55-55: Use X | Y for type annotations

Convert to X | Y

(UP007)


59-59: Blank line contains whitespace

Remove whitespace from blank line

(W293)


66-66: Blank line contains whitespace

Remove whitespace from blank line

(W293)


74-74: Blank line contains whitespace

Remove whitespace from blank line

(W293)


100-100: Blank line contains whitespace

Remove whitespace from blank line

(W293)


103-103: Blank line contains whitespace

Remove whitespace from blank line

(W293)


109-109: Use X | Y for type annotations

Convert to X | Y

(UP007)


110-110: Use X | Y for type annotations

Convert to X | Y

(UP007)


111-111: Use X | Y for type annotations

Convert to X | Y

(UP007)


112-112: Use X | Y for type annotations

Convert to X | Y

(UP007)


113-113: Use X | Y for type annotations

Convert to X | Y

(UP007)


117-117: Blank line contains whitespace

Remove whitespace from blank line

(W293)


125-125: Blank line contains whitespace

Remove whitespace from blank line

(W293)


133-133: Blank line contains whitespace

Remove whitespace from blank line

(W293)


141-141: Blank line contains whitespace

Remove whitespace from blank line

(W293)


148-148: Blank line contains whitespace

Remove whitespace from blank line

(W293)


154-154: Use X | Y for type annotations

Convert to X | Y

(UP007)


158-158: Use list instead of List for type annotation

Replace with list

(UP006)


160-160: Blank line contains whitespace

Remove whitespace from blank line

(W293)


166-166: Blank line contains whitespace

Remove whitespace from blank line

(W293)


172-172: Blank line contains whitespace

Remove whitespace from blank line

(W293)


178-178: Blank line contains whitespace

Remove whitespace from blank line

(W293)


182-182: Blank line contains whitespace

Remove whitespace from blank line

(W293)


188-188: Use X | Y for type annotations

Convert to X | Y

(UP007)


189-189: Use X | Y for type annotations

Convert to X | Y

(UP007)


189-189: Use list instead of List for type annotation

Replace with list

(UP006)


190-190: Use X | Y for type annotations

Convert to X | Y

(UP007)


191-191: Use X | Y for type annotations

Convert to X | Y

(UP007)


192-192: Use X | Y for type annotations

Convert to X | Y

(UP007)


193-193: Use X | Y for type annotations

Convert to X | Y

(UP007)


193-193: Use list instead of List for type annotation

Replace with list

(UP006)


196-196: Blank line contains whitespace

Remove whitespace from blank line

(W293)


204-204: Blank line contains whitespace

Remove whitespace from blank line

(W293)


211-211: Blank line contains whitespace

Remove whitespace from blank line

(W293)


221-221: Blank line contains whitespace

Remove whitespace from blank line

(W293)


224-224: Blank line contains whitespace

Remove whitespace from blank line

(W293)


234-234: Blank line contains whitespace

Remove whitespace from blank line

(W293)


249-249: Use dict instead of Dict for type annotation

Replace with dict

(UP006)


251-251: Blank line contains whitespace

Remove whitespace from blank line

(W293)


262-262: Blank line contains whitespace

Remove whitespace from blank line

(W293)


269-269: Blank line contains whitespace

Remove whitespace from blank line

(W293)


277-277: Blank line contains whitespace

Remove whitespace from blank line

(W293)


281-281: Blank line contains whitespace

Remove whitespace from blank line

(W293)


284-284: Loop control variable i not used within loop body

(B007)


287-287: Blank line contains whitespace

Remove whitespace from blank line

(W293)


291-291: Blank line contains whitespace

Remove whitespace from blank line

(W293)


295-295: Blank line contains whitespace

Remove whitespace from blank line

(W293)


304-304: Blank line contains whitespace

Remove whitespace from blank line

(W293)


308-308: Blank line contains whitespace

Remove whitespace from blank line

(W293)


318-318: Blank line contains whitespace

Remove whitespace from blank line

(W293)


323-323: Blank line contains whitespace

Remove whitespace from blank line

(W293)


326-326: Blank line contains whitespace

Remove whitespace from blank line

(W293)


331-331: Blank line contains whitespace

Remove whitespace from blank line

(W293)


332-332: Loop control variable i not used within loop body

(B007)


336-336: Blank line contains whitespace

Remove whitespace from blank line

(W293)


340-340: Blank line contains whitespace

Remove whitespace from blank line

(W293)


344-344: Blank line contains whitespace

Remove whitespace from blank line

(W293)


352-352: Blank line contains whitespace

Remove whitespace from blank line

(W293)


356-356: Blank line contains whitespace

Remove whitespace from blank line

(W293)


360-360: Blank line contains whitespace

Remove whitespace from blank line

(W293)


364-364: Blank line contains whitespace

Remove whitespace from blank line

(W293)


375-375: Blank line contains whitespace

Remove whitespace from blank line

(W293)


378-378: Blank line contains whitespace

Remove whitespace from blank line

(W293)


386-386: Blank line contains whitespace

Remove whitespace from blank line

(W293)


389-389: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_error_recovery.py

5-5: unittest.mock.AsyncMock imported but unused

Remove unused import

(F401)


5-5: unittest.mock.MagicMock imported but unused

Remove unused import

(F401)


8-8: src.verifact_agents.claim_detector.Claim imported but unused

Remove unused import: src.verifact_agents.claim_detector.Claim

(F401)


9-9: src.verifact_agents.evidence_hunter.Evidence imported but unused

Remove unused import: src.verifact_agents.evidence_hunter.Evidence

(F401)


56-56: Blank line contains whitespace

Remove whitespace from blank line

(W293)


62-62: Blank line contains whitespace

Remove whitespace from blank line

(W293)


74-74: Blank line contains whitespace

Remove whitespace from blank line

(W293)


76-76: Blank line contains whitespace

Remove whitespace from blank line

(W293)


79-79: Blank line contains whitespace

Remove whitespace from blank line

(W293)


94-94: Blank line contains whitespace

Remove whitespace from blank line

(W293)


99-99: Blank line contains whitespace

Remove whitespace from blank line

(W293)


117-117: Blank line contains whitespace

Remove whitespace from blank line

(W293)


119-119: Blank line contains whitespace

Remove whitespace from blank line

(W293)


122-122: Blank line contains whitespace

Remove whitespace from blank line

(W293)


126-126: Blank line contains whitespace

Remove whitespace from blank line

(W293)


141-141: Blank line contains whitespace

Remove whitespace from blank line

(W293)


145-145: Blank line contains whitespace

Remove whitespace from blank line

(W293)


152-152: Blank line contains whitespace

Remove whitespace from blank line

(W293)


156-156: Blank line contains whitespace

Remove whitespace from blank line

(W293)


163-163: Blank line contains whitespace

Remove whitespace from blank line

(W293)


168-168: Blank line contains whitespace

Remove whitespace from blank line

(W293)


171-171: Blank line contains whitespace

Remove whitespace from blank line

(W293)


173-173: Blank line contains whitespace

Remove whitespace from blank line

(W293)


176-176: Blank line contains whitespace

Remove whitespace from blank line

(W293)


194-194: Blank line contains whitespace

Remove whitespace from blank line

(W293)


198-198: Blank line contains whitespace

Remove whitespace from blank line

(W293)


205-205: Blank line contains whitespace

Remove whitespace from blank line

(W293)


212-212: Blank line contains whitespace

Remove whitespace from blank line

(W293)


217-217: Blank line contains whitespace

Remove whitespace from blank line

(W293)


229-229: Blank line contains whitespace

Remove whitespace from blank line

(W293)


231-231: Blank line contains whitespace

Remove whitespace from blank line

(W293)


234-234: Blank line contains whitespace

Remove whitespace from blank line

(W293)


237-237: Blank line contains whitespace

Remove whitespace from blank line

(W293)


253-253: Blank line contains whitespace

Remove whitespace from blank line

(W293)


257-257: Blank line contains whitespace

Remove whitespace from blank line

(W293)


264-264: Blank line contains whitespace

Remove whitespace from blank line

(W293)


268-268: Blank line contains whitespace

Remove whitespace from blank line

(W293)


275-275: Blank line contains whitespace

Remove whitespace from blank line

(W293)


280-280: Blank line contains whitespace

Remove whitespace from blank line

(W293)


283-283: Blank line contains whitespace

Remove whitespace from blank line

(W293)


285-285: Blank line contains whitespace

Remove whitespace from blank line

(W293)


288-288: Blank line contains whitespace

Remove whitespace from blank line

(W293)


302-302: Blank line contains whitespace

Remove whitespace from blank line

(W293)


306-306: Blank line contains whitespace

Remove whitespace from blank line

(W293)


318-318: Blank line contains whitespace

Remove whitespace from blank line

(W293)


320-320: Blank line contains whitespace

Remove whitespace from blank line

(W293)


323-323: Blank line contains whitespace

Remove whitespace from blank line

(W293)


335-335: Blank line contains whitespace

Remove whitespace from blank line

(W293)


339-339: Blank line contains whitespace

Remove whitespace from blank line

(W293)


345-345: Blank line contains whitespace

Remove whitespace from blank line

(W293)


347-347: Blank line contains whitespace

Remove whitespace from blank line

(W293)


351-351: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/verifact_manager_openrouter.py

15-15: typing.List is deprecated, use list instead

(UP035)


15-15: typing.Any imported but unused

Remove unused import: typing.Any

(F401)


58-58: Use list instead of List for type annotation

Replace with list

(UP006)


63-63: Use X | Y for type annotations

Convert to X | Y

(UP007)


71-71: Missing docstring in public class

(D101)


72-72: Missing docstring in __init__

(D107)


84-84: Use list instead of List for type annotation

Replace with list

(UP006)


133-133: Use list instead of List for type annotation

Replace with list

(UP006)


136-136: Blank line contains whitespace

Remove whitespace from blank line

(W293)


137-149: Use f-string instead of format call

Convert to f-string

(UP032)
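The UP032 fix is mechanical: replace `str.format` calls with f-strings. For a prompt template like the ones flagged here, it looks roughly like this (variable names are illustrative):

```python
claim = "The Earth is round"
evidence = "NASA imagery"

# Before: "Claim: {}\nEvidence: {}".format(claim, evidence)
# After (UP032): the f-string reads the values inline
prompt = f"Claim: {claim}\nEvidence: {evidence}"
```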


140-140: Blank line contains whitespace

Remove whitespace from blank line

(W293)


144-144: Blank line contains whitespace

Remove whitespace from blank line

(W293)


146-146: Blank line contains whitespace

Remove whitespace from blank line

(W293)


150-150: Blank line contains whitespace

Remove whitespace from blank line

(W293)


152-152: Blank line contains whitespace

Remove whitespace from blank line

(W293)


160-160: Blank line contains whitespace

Remove whitespace from blank line

(W293)


171-171: Use list instead of List for type annotation

Replace with list

(UP006)


174-174: Blank line contains whitespace

Remove whitespace from blank line

(W293)


175-190: Use f-string instead of format call

Convert to f-string

(UP032)


177-177: Blank line contains whitespace

Remove whitespace from blank line

(W293)


179-179: Blank line contains whitespace

Remove whitespace from blank line

(W293)


185-185: Blank line contains whitespace

Remove whitespace from blank line

(W293)


187-187: Blank line contains whitespace

Remove whitespace from blank line

(W293)


191-191: Blank line contains whitespace

Remove whitespace from blank line

(W293)


193-193: Blank line contains whitespace

Remove whitespace from blank line

(W293)


201-201: Blank line contains whitespace

Remove whitespace from blank line

(W293)


212-212: Use list instead of List for type annotation

Replace with list

(UP006)


212-212: Use list instead of List for type annotation

Replace with list

(UP006)


212-212: Use X | Y for type annotations

Convert to X | Y

(UP007)


212-212: Use list instead of List for type annotation

Replace with list

(UP006)


218-218: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)


230-230: Use list instead of List for type annotation

Replace with list

(UP006)


233-233: Blank line contains whitespace

Remove whitespace from blank line

(W293)


238-238: Blank line contains whitespace

Remove whitespace from blank line

(W293)


239-257: Use f-string instead of format call

Convert to f-string

(UP032)


241-241: Blank line contains whitespace

Remove whitespace from blank line

(W293)


243-243: Blank line contains whitespace

Remove whitespace from blank line

(W293)


250-250: Blank line contains whitespace

Remove whitespace from blank line

(W293)


252-252: Blank line contains whitespace

Remove whitespace from blank line

(W293)


254-254: Blank line contains whitespace

Remove whitespace from blank line

(W293)


258-258: Blank line contains whitespace

Remove whitespace from blank line

(W293)


260-260: Blank line contains whitespace

Remove whitespace from blank line

(W293)


268-268: Blank line contains whitespace

Remove whitespace from blank line

(W293)


283-283: Use list instead of List for type annotation

Replace with list

(UP006)


283-283: Use X | Y for type annotations

Convert to X | Y

(UP007)


283-283: Use list instead of List for type annotation

Replace with list

(UP006)


283-283: Use list instead of List for type annotation

Replace with list

(UP006)


289-289: f-string without any placeholders

Remove extraneous f prefix

(F541)


304-304: Missing docstring in public function

(D103)

src/verifact_manager.py

13-13: from src.utils.openrouter_config import * used; unable to detect undefined names

(F403)


16-16: typing.List is deprecated, use list instead

(UP035)


16-16: typing.Dict is deprecated, use dict instead

(UP035)


16-16: typing.Dict imported but unused

Remove unused import

(F401)


16-16: typing.Any imported but unused

Remove unused import

(F401)


30-30: Use X | Y for type annotations

Convert to X | Y

(UP007)


43-43: Missing argument description in the docstring for run: query

(D417)


43-43: Use list instead of List for type annotation

Replace with list

(UP006)


83-83: Use list instead of List for type annotation

Replace with list

(UP006)


87-87: Use list instead of List for type annotation

Replace with list

(UP006)


90-90: Use list instead of List for type annotation

Replace with list

(UP006)


92-92: Use list instead of List for type annotation

Replace with list

(UP006)


103-103: Use list instead of List for type annotation

Replace with list

(UP006)


105-105: Use list instead of List for type annotation

Replace with list

(UP006)


105-105: Use list instead of List for type annotation

Replace with list

(UP006)


105-105: Use X | Y for type annotations

Convert to X | Y

(UP007)


105-105: Use list instead of List for type annotation

Replace with list

(UP006)


123-123: Use list instead of List for type annotation

Replace with list

(UP006)


135-135: Use list instead of List for type annotation

Replace with list

(UP006)


135-135: Use X | Y for type annotations

Convert to X | Y

(UP007)


135-135: Use list instead of List for type annotation

Replace with list

(UP006)


135-135: Use list instead of List for type annotation

Replace with list

(UP006)


141-141: f-string without any placeholders

Remove extraneous f prefix

(F541)

src/tests/utils/performance_utils.py

4-4: asyncio imported but unused

Remove unused import: asyncio

(F401)


5-5: Import from collections.abc instead: Callable, Awaitable

Import from collections.abc

(UP035)


5-5: typing.Dict is deprecated, use dict instead

(UP035)


5-5: typing.List is deprecated, use list instead

(UP035)


5-5: typing.Tuple is deprecated, use tuple instead

(UP035)


5-5: typing.Tuple imported but unused

Remove unused import: typing.Tuple

(F401)


15-15: Blank line contains whitespace

Remove whitespace from blank line

(W293)


18-18: Use dict instead of Dict for type annotation

Replace with dict

(UP006)


24-24: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Use list instead of List for type annotation

Replace with list

(UP006)


38-38: Blank line contains whitespace

Remove whitespace from blank line

(W293)


66-66: Blank line contains whitespace

Remove whitespace from blank line

(W293)


67-67: Missing docstring in __init__

(D107)


68-68: Use list instead of List for type annotation

Replace with list

(UP006)


71-71: Blank line contains whitespace

Remove whitespace from blank line

(W293)


77-77: Blank line contains whitespace

Remove whitespace from blank line

(W293)


82-82: Blank line contains whitespace

Remove whitespace from blank line

(W293)


86-86: Blank line contains whitespace

Remove whitespace from blank line

(W293)


87-87: Use X | Y for type annotations

Convert to X | Y

(UP007)


87-87: Use dict instead of Dict for type annotation

Replace with dict

(UP006)


94-94: Blank line contains whitespace

Remove whitespace from blank line

(W293)


106-106: Blank line contains whitespace

Remove whitespace from blank line

(W293)


112-112: Blank line contains whitespace

Remove whitespace from blank line

(W293)


114-114: Blank line contains whitespace

Remove whitespace from blank line

(W293)


119-119: Blank line contains whitespace

Remove whitespace from blank line

(W293)


121-121: Blank line contains whitespace

Remove whitespace from blank line

(W293)


123-123: Use dict instead of Dict for type annotation

Replace with dict

(UP006)


123-123: Use list instead of List for type annotation

Replace with list

(UP006)


128-128: Blank line contains whitespace

Remove whitespace from blank line

(W293)


133-133: Blank line contains whitespace

Remove whitespace from blank line

(W293)


137-137: Blank line contains whitespace

Remove whitespace from blank line

(W293)


139-139: Blank line contains whitespace

Remove whitespace from blank line

(W293)


143-143: Blank line contains whitespace

Remove whitespace from blank line

(W293)


147-147: Blank line contains whitespace

Remove whitespace from blank line

(W293)


167-167: Use list instead of List for type annotation

Replace with list

(UP006)


169-169: Use list instead of List for type annotation

Replace with list

(UP006)


171-171: Blank line contains whitespace

Remove whitespace from blank line

(W293)


176-176: Blank line contains whitespace

Remove whitespace from blank line

(W293)


181-181: Blank line contains whitespace

Remove whitespace from blank line

(W293)


186-186: Blank line contains whitespace

Remove whitespace from blank line

(W293)


191-191: Blank line contains whitespace

Remove whitespace from blank line

(W293)


193-193: Function definition does not bind loop variable tracker

(B023)


195-195: Function definition does not bind loop variable original_detect_claims

(B023)


198-198: Blank line contains whitespace

Remove whitespace from blank line

(W293)


200-200: Function definition does not bind loop variable tracker

(B023)


202-202: Function definition does not bind loop variable original_gather_evidence_for_claim

(B023)


205-205: Blank line contains whitespace

Remove whitespace from blank line

(W293)


207-207: Function definition does not bind loop variable tracker

(B023)


209-209: Function definition does not bind loop variable original_generate_verdict_for_claim

(B023)
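The B023 warnings above indicate closures defined inside a loop that read loop variables late, so every closure would observe the loop's final value. Binding the current value through a default argument is the standard fix:

```python
callbacks = []

# Before (B023): `lambda: x` looks up `x` when called, so all three
# callbacks would return 2
# After: the default argument captures the value at definition time
for x in range(3):
    callbacks.append(lambda x=x: x)

results = [cb() for cb in callbacks]
```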


213-213: Blank line contains whitespace

Remove whitespace from blank line

(W293)


218-218: Blank line contains whitespace

Remove whitespace from blank line

(W293)


227-227: Blank line contains whitespace

Remove whitespace from blank line

(W293)


230-230: Blank line contains whitespace

Remove whitespace from blank line

(W293)


234-234: Use list instead of List for type annotation

Replace with list

(UP006)


234-234: Use dict instead of Dict for type annotation

Replace with dict

(UP006)


236-236: Blank line contains whitespace

Remove whitespace from blank line

(W293)


239-239: Blank line contains whitespace

Remove whitespace from blank line

(W293)


245-245: Blank line contains whitespace

Remove whitespace from blank line

(W293)


252-252: Blank line contains whitespace

Remove whitespace from blank line

(W293)


291-291: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🪛 LanguageTool
src/tests/fixtures/README.md

[uncategorized] ~11-~11: Loose punctuation mark.
Context: ...ganized by domain: - POLITICAL_CLAIMS: Claims related to politics and internat...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~22-~22: Loose punctuation mark.
Context: ...anized by topic: - POLITICAL_EVIDENCE: Evidence for political claims - `HEALTH...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~34-~34: Loose punctuation mark.
Context: ...nized by domain: - POLITICAL_VERDICTS: Verdicts for political claims - `HEALTH...

(UNLIKELY_OPENING_PUNCTUATION)

src/tests/README.md

[uncategorized] ~7-~7: Loose punctuation mark.
Context: ...oject. ## Test Structure - fixtures/: Test fixtures with sample data - `integ...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~77-~77: Loose punctuation mark.
Context: ... sample data for testing: - claims.py: Sample factual claims - evidence.py: ...

(UNLIKELY_OPENING_PUNCTUATION)

🪛 GitHub Check: Codacy Static Code Analysis
src/tests/test_claim_types.py

[warning] 249-249: src/tests/test_claim_types.py#L249
Method test_mixed_claim_types.mock_runner_side_effect has a cyclomatic complexity of 10 (limit is 8)

src/tests/test_complex_claims.py

[warning] 240-240: src/tests/test_complex_claims.py#L240
Method test_claim_with_mixed_verdicts has 86 lines of code (limit is 50)

src/tests/test_performance.py

[warning] 48-48: src/tests/test_performance.py#L48
Method test_performance_tracking.mock_runner_side_effect has a cyclomatic complexity of 10 (limit is 8)

src/tests/test_data_flow.py

[warning] 63-63: src/tests/test_data_flow.py#L63
Method test_data_flow_integrity.mock_runner_side_effect has a cyclomatic complexity of 10 (limit is 8)

src/tests/test_fixtures.py

[warning] 102-102: src/tests/test_fixtures.py#L102
Method test_fixture_relationships has a cyclomatic complexity of 9 (limit is 8)

src/tests/utils/mock_data_factory.py

[warning] 50-50: src/tests/utils/mock_data_factory.py#L50
Method create_claim has a cyclomatic complexity of 14 (limit is 8)


[warning] 71-71: src/tests/utils/mock_data_factory.py#L71
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 102-102: src/tests/utils/mock_data_factory.py#L102
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 130-130: src/tests/utils/mock_data_factory.py#L130
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 131-131: src/tests/utils/mock_data_factory.py#L131
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 132-132: src/tests/utils/mock_data_factory.py#L132
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 144-144: src/tests/utils/mock_data_factory.py#L144
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 177-177: src/tests/utils/mock_data_factory.py#L177
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 186-186: src/tests/utils/mock_data_factory.py#L186
Method create_verdict has 52 lines of code (limit is 50)


[warning] 186-186: src/tests/utils/mock_data_factory.py#L186
Method create_verdict has a cyclomatic complexity of 14 (limit is 8)


[warning] 210-210: src/tests/utils/mock_data_factory.py#L210
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 214-214: src/tests/utils/mock_data_factory.py#L214
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 218-218: src/tests/utils/mock_data_factory.py#L218
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 220-220: src/tests/utils/mock_data_factory.py#L220
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 244-244: src/tests/utils/mock_data_factory.py#L244
Method create_scenario has 99 lines of code (limit is 50)


[warning] 274-274: src/tests/utils/mock_data_factory.py#L274
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 289-289: src/tests/utils/mock_data_factory.py#L289
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 358-358: src/tests/utils/mock_data_factory.py#L358
Standard pseudo-random generators are not suitable for security/cryptographic purposes.
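For mock test data, `random` is generally acceptable (and seedable for reproducibility); these warnings matter only if a generated value were ever security-sensitive, in which case the `secrets` module is the drop-in alternative. A sketch of the distinction:

```python
import random
import secrets

# Mock/test data: `random` is fine and can be seeded for reproducibility
random.seed(42)
mock_confidence = random.random()

# Security-sensitive values (tokens, keys): use the CSPRNG-backed `secrets`
token = secrets.token_hex(16)  # 32 hex characters
```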

src/tests/test_error_recovery.py

[warning] 143-143: src/tests/test_error_recovery.py#L143
Method test_partial_evidence_failure.mock_runner_side_effect has a cyclomatic complexity of 11 (limit is 8)

src/tests/utils/performance_utils.py

[warning] 115-115: src/tests/utils/performance_utils.py#L115
Method generate_report has a cyclomatic complexity of 10 (limit is 8)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (21)
src/tests/integration/__init__.py (1)

1-1: Integration package initialization.
Adding an __init__.py with a descriptive docstring correctly marks the integration directory as a package for test discovery.

.env.template (2)

33-37: Search configuration.
The USE_SERPER toggle with a placeholder SERPER_API_KEY is clear and optional. No changes needed.


51-56: Chainlit UI configuration.
Defaults for the Chainlit UI (host, port, auth toggle) are reasonable, and leaving CHAINLIT_AUTH_SECRET blank by default is appropriate.

src/tests/utils/__init__.py (1)

1-1: Utilities package initialization.
Adding an __init__.py with a clear module docstring correctly enables test utilities to be importable.

src/tests/fixtures/__init__.py (1)

1-1: Fixtures package initialization.
The fixtures directory is now a proper package with a concise docstring, supporting organized import of test fixtures.

src/tests/fixtures/README.md (1)

1-103: Documentation is clear and well-structured
The README offers comprehensive guidance on fixture categories, usage examples, and extension procedures. It aligns well with the new test suite and will help contributors understand and expand the fixtures.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~11-~11: Loose punctuation mark.
Context: ...ganized by domain: - POLITICAL_CLAIMS: Claims related to politics and internat...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~22-~22: Loose punctuation mark.
Context: ...anized by topic: - POLITICAL_EVIDENCE: Evidence for political claims - `HEALTH...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~34-~34: Loose punctuation mark.
Context: ...nized by domain: - POLITICAL_VERDICTS: Verdicts for political claims - `HEALTH...

(UNLIKELY_OPENING_PUNCTUATION)

src/utils/openrouter_config.py (1)

27-28: Verify environment variable names for Agents SDK
Double-check that the OpenAI Agents SDK recognizes OPENAI_API_BASE for customizing the endpoint. If it expects a different variable name (e.g., OPENAI_API_BASE_URL), update accordingly to ensure the SDK routes through OpenRouter.

.coveragerc (1)

1-149: Comprehensive coverage configuration looks good.

The coverage configuration is well-structured and thorough, excluding appropriate patterns and special methods. This will result in more meaningful coverage metrics by focusing on your actual business logic.

src/tests/conftest.py (2)

11-15: Well-structured pytest markers.

The custom markers for different test types (e2e, integration, unit) are well-defined and provide good organization for test filtering.


18-29: Good logging setup with noise reduction.

The logging configuration is well-implemented, reducing noise from HTTP libraries while maintaining informative logs for the test code.
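A conftest logging setup of the kind described typically looks something like this (a sketch; the exact logger names silenced in the PR may differ):

```python
import logging

def configure_test_logging() -> None:
    """Keep test logs at INFO while silencing chatty HTTP libraries."""
    logging.basicConfig(level=logging.INFO)
    # httpx/httpcore log one line per request at INFO; raise their threshold
    for noisy in ("httpx", "httpcore", "urllib3"):
        logging.getLogger(noisy).setLevel(logging.WARNING)

configure_test_logging()
```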

src/tests/README.md (2)

1-133: LGTM - Well-structured and comprehensive test documentation.

The README provides clear instructions for running different types of tests, generating coverage reports, and writing new tests. The documentation is thorough and will be valuable for anyone working with the test suite.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~7-~7: Loose punctuation mark.
Context: ...oject. ## Test Structure - fixtures/: Test fixtures with sample data - `integ...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~77-~77: Loose punctuation mark.
Context: ... sample data for testing: - claims.py: Sample factual claims - evidence.py: ...

(UNLIKELY_OPENING_PUNCTUATION)


81-81:


✅ Verification successful

Verify the fixtures README reference.

Please ensure that the referenced file fixtures/README.md exists. If it doesn't, either create it or update this reference.


🏁 Script executed:

#!/bin/bash
# Check if the fixtures README file exists
if [ -f "src/tests/fixtures/README.md" ]; then
  echo "Fixtures README exists"
else
  echo "Fixtures README does not exist"
fi

Length of output: 96


No action needed: fixtures README reference is valid

The file src/tests/fixtures/README.md exists, so the link is correct.

src/tests/fixtures/evidence.py (1)

1-171: LGTM - Well-structured evidence fixtures.

The evidence fixtures are comprehensive and well-organized, providing a good variety of test data across different domains (political, health, science, economic). The use of dictionaries mapping claims to evidence lists is a clean approach that will make these fixtures easy to use in tests.

src/tests/fixtures/claims.py (1)

1-126: LGTM - Well-structured claim fixtures.

The claim fixtures are comprehensive and well-organized, providing a good variety of test data across different domains. The SAMPLE_TEXTS array is particularly useful for testing claim detection from mixed content.

🧰 Tools
🪛 Ruff (0.11.9)

110-110: Blank line contains whitespace

Remove whitespace from blank line

(W293)


115-115: Blank line contains whitespace

Remove whitespace from blank line

(W293)


120-120: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/fixtures/verdicts.py (1)

1-127: LGTM - Well-structured verdict fixtures.

The verdict fixtures are comprehensive and well-organized, with detailed explanations and realistic sources. The inclusion of different verdict types (true, false, partially true, unverifiable) provides good coverage for testing various scenarios.

src/tests/test_claim_edge_cases.py (1)

158-212: LGTM! Well-structured test for conflicting evidence.

The test case thoroughly verifies handling of conflicting evidence, with appropriate mocking of the pipeline's behavior and assertions for the expected outcomes.

🧰 Tools
🪛 Ruff (0.11.9)

164-164: Blank line contains whitespace

Remove whitespace from blank line

(W293)


180-180: Blank line contains whitespace

Remove whitespace from blank line

(W293)


189-189: Blank line contains whitespace

Remove whitespace from blank line

(W293)


193-193: Blank line contains whitespace

Remove whitespace from blank line

(W293)


201-201: Blank line contains whitespace

Remove whitespace from blank line

(W293)


203-203: Blank line contains whitespace

Remove whitespace from blank line

(W293)


206-206: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_fixtures.py (1)

33-51: LGTM! Good test coverage for claims fixtures.

The test thoroughly validates the structure and content of the claims fixtures, ensuring they are correctly typed and non-empty.

🧰 Tools
🪛 Ruff (0.11.9)

38-38: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: Trailing whitespace

Remove trailing whitespace

(W291)


42-42: Trailing whitespace

Remove trailing whitespace

(W291)


43-43: Trailing whitespace

Remove trailing whitespace

(W291)


46-46: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/test_verifact_manager.py (2)

44-60: LGTM! Well-structured test for claim detection.

The test correctly validates the _detect_claims method functionality, including appropriate mocking of the Runner and verification of results.

🧰 Tools
🪛 Ruff (0.11.9)

51-51: Blank line contains whitespace

Remove whitespace from blank line

(W293)


54-54: Blank line contains whitespace

Remove whitespace from blank line

(W293)


101-147: LGTM! Thorough test for verdict generation.

The test properly validates the _generate_all_verdicts method's handling of claims with and without evidence, ensuring it correctly skips claims without evidence.

🧰 Tools
🪛 Ruff (0.11.9)

111-111: Blank line contains whitespace

Remove whitespace from blank line

(W293)


114-114: Blank line contains whitespace

Remove whitespace from blank line

(W293)


131-131: Blank line contains whitespace

Remove whitespace from blank line

(W293)


137-137: Blank line contains whitespace

Remove whitespace from blank line

(W293)


139-139: Blank line contains whitespace

Remove whitespace from blank line

(W293)


142-142: Blank line contains whitespace

Remove whitespace from blank line

(W293)

src/tests/integration/test_pipeline_integration.py (1)

160-196: LGTM! Well-designed agent integration test.

The test effectively validates the integration between the three main agents and the VerifactManager, with thorough assertions on call counts and return values.

src/tests/test_data_flow.py (1)

324-380: max_claims test will fail until manager enforces the limit

test_max_claims_limit assumes the pipeline truncates work to config.max_claims.
Implement the filtering step suggested in verifact_manager.py (see earlier comment). Once done, the assertions here will pass.

No changes required in the test itself.

🧰 Tools
🪛 Ruff (0.11.9)

331-331: Blank line contains whitespace

Remove whitespace from blank line

(W293)


336-336: Blank line contains whitespace

Remove whitespace from blank line

(W293)


345-345: Blank line contains whitespace

Remove whitespace from blank line

(W293)


359-359: Blank line contains whitespace

Remove whitespace from blank line

(W293)


371-371: Blank line contains whitespace

Remove whitespace from blank line

(W293)


373-373: Blank line contains whitespace

Remove whitespace from blank line

(W293)


376-376: Blank line contains whitespace

Remove whitespace from blank line

(W293)


380-380: Blank line contains whitespace

Remove whitespace from blank line

(W293)

Comment on lines +1 to +3
# Configure OpenAI client to use OpenRouter
from src.utils.openrouter_config import *


🛠️ Refactor suggestion

Avoid wildcard imports.
Using from src.utils.openrouter_config import * pollutes the namespace and disables linting for undefined names (F403). It’s better to import only what’s needed or import the module to trigger configuration:

- from src.utils.openrouter_config import *
+ import src.utils.openrouter_config  # initializes OpenRouter settings
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

  # Configure OpenAI client to use OpenRouter
- from src.utils.openrouter_config import *
+ import src.utils.openrouter_config  # initializes OpenRouter settings
🧰 Tools
🪛 Ruff (0.11.9)

2-2: from src.utils.openrouter_config import * used; unable to detect undefined names

(F403)

🤖 Prompt for AI Agents
In src/verifact_agents/claim_detector.py at lines 1 to 3, replace the wildcard
import from src.utils.openrouter_config with explicit imports of only the
necessary functions or classes, or alternatively import the module itself to
trigger configuration without polluting the namespace. This will prevent
namespace pollution and enable proper linting for undefined names.

Comment on lines +12 to +15
load_dotenv()

# Get API key from environment
OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")

🛠️ Refactor suggestion

Validate presence of OPENROUTER_API_KEY
If OPENROUTER_API_KEY is missing, openai.api_key will be set to None, leading to unclear errors later. Add a guard clause to fail fast with a clear message when the key is not provided.

Example:

- OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
+ OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
+ if not OPENROUTER_API_KEY:
+     raise RuntimeError("Environment variable OPENROUTER_API_KEY must be set")
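A minimal runnable sketch of such a guard (the helper name require_env is ours, not the project's):

```python
import os

def require_env(name: str) -> str:
    """Return the named environment variable, failing fast if it is unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Environment variable {name} must be set")
    return value

os.environ.pop("OPENROUTER_API_KEY", None)
try:
    require_env("OPENROUTER_API_KEY")
except RuntimeError as e:
    print(e)  # Environment variable OPENROUTER_API_KEY must be set

os.environ["OPENROUTER_API_KEY"] = "sk-or-demo"  # stand-in value for the demo
print(require_env("OPENROUTER_API_KEY"))         # sk-or-demo
```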
🤖 Prompt for AI Agents
In src/utils/openrouter_config.py around lines 12 to 15, the code retrieves
OPENROUTER_API_KEY from the environment but does not check if it is present. Add
a guard clause immediately after loading the key to check if OPENROUTER_API_KEY
is None or empty, and if so, raise an exception or exit with a clear error
message indicating the missing API key. This will ensure the program fails fast
with a clear explanation instead of failing later with unclear errors.

Comment on lines +18 to +23
openai.api_key = OPENROUTER_API_KEY
openai.base_url = "https://openrouter.ai/api/v1"
openai.default_headers = {
"HTTP-Referer": "https://verifact.ai", # Replace with your site URL
"X-Title": "VeriFact", # Replace with your site name
}

⚠️ Potential issue

Use the correct attribute for OpenAI base URL
The legacy (pre-1.0) OpenAI Python client expects api_base, not base_url, so setting openai.base_url there is silently ignored (the 1.x client, by contrast, does use base_url). If the project pins the legacy client, update this section to:

- openai.base_url = "https://openrouter.ai/api/v1"
+ openai.api_base = "https://openrouter.ai/api/v1"

Also confirm that openai.default_headers is supported in your client version or adjust to the proper configuration API.

🤖 Prompt for AI Agents
In src/utils/openrouter_config.py around lines 18 to 23, replace the incorrect
attribute openai.base_url with openai.api_base to correctly set the OpenAI API
base URL. Additionally, verify if openai.default_headers is supported by your
OpenAI client version; if not, update the code to use the appropriate method or
attribute for setting default headers according to the client documentation.

Comment on lines +54 to +71
except Exception as e:
# Fallback to placeholder response in case of errors
response = FactCheckResponse(
claims=[
Claim(
text="Error processing request",
verdict="Unverifiable",
confidence=0.0,
explanation=f"Error: {str(e)}",
sources=[]
)
],
metadata={
"processing_time": f"{time.time() - start_time:.1f}s",
"model_version": "1.0.5",
"error": str(e)
}
)

🛠️ Refactor suggestion

Improve error handling with more specific exception types.

The current implementation catches all exceptions with a generic handler. Consider catching specific exceptions to provide more targeted error messages and better debugging information.

    try:
        verdicts = await manager.run(request.text)
        # ...process verdicts...
-    except Exception as e:
+    except ValueError as e:
        # Fallback to placeholder response in case of errors
        response = FactCheckResponse(
            claims=[
                Claim(
                    text="Error processing request",
                    verdict="Unverifiable",
                    confidence=0.0,
-                    explanation=f"Error: {str(e)}",
+                    explanation=f"Input validation error: {str(e)}",
                    sources=[]
                )
            ],
            metadata={
                "processing_time": f"{time.time() - start_time:.1f}s",
                "model_version": "1.0.5",
                "error": str(e)
            }
        )
+    except (ConnectionError, TimeoutError) as e:
+        response = FactCheckResponse(
+            claims=[
+                Claim(
+                    text="Error connecting to services",
+                    verdict="Unverifiable",
+                    confidence=0.0,
+                    explanation="Could not connect to required services. Please try again later.",
+                    sources=[]
+                )
+            ],
+            metadata={
+                "processing_time": f"{time.time() - start_time:.1f}s",
+                "model_version": "1.0.5",
+                "error": str(e)
+            }
+        )
+    except Exception as e:
+        # Fallback for unexpected errors
+        response = FactCheckResponse(
+            claims=[
+                Claim(
+                    text="Error processing request",
+                    verdict="Unverifiable",
+                    confidence=0.0,
+                    explanation=f"Unexpected error: {str(e)}",
+                    sources=[]
+                )
+            ],
+            metadata={
+                "processing_time": f"{time.time() - start_time:.1f}s",
+                "model_version": "1.0.5",
+                "error": str(e),
+                "error_type": type(e).__name__
+            }
+        )
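A compact sketch of the ordering rule at work; the handler bodies here are simplified stand-ins for the FactCheckResponse fallbacks, and most-specific handlers must come first because Python stops at the first matching except clause:

```python
def handle(op):
    """Run op(), mapping exceptions to messages: most specific handlers first."""
    try:
        return op()
    except ValueError as e:
        return f"input error: {e}"
    except (ConnectionError, TimeoutError) as e:
        return f"service error: {e}"
    except Exception as e:  # last-resort fallback for unexpected failures
        return f"unexpected {type(e).__name__}: {e}"

def bad_input():
    raise ValueError("empty claim text")

def down():
    raise TimeoutError("evidence service timed out")

print(handle(bad_input))     # input error: empty claim text
print(handle(down))          # service error: evidence service timed out
print(handle(lambda: "ok"))  # ok
```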
🤖 Prompt for AI Agents
In src/api/factcheck.py around lines 54 to 71, the current error handling uses a
generic Exception catch-all, which is too broad. Refactor the except block to
catch more specific exceptions relevant to the operations being performed, such
as network errors, parsing errors, or model inference errors. Add separate
except clauses for these specific exceptions with tailored error messages or
handling logic, and keep a generic except block only as a last fallback to catch
unexpected errors.

Comment on lines +31 to +41
COVERAGE=$($PYTHON -m coverage report | grep TOTAL | awk '{print $4}' | sed 's/%//')
if [ -z "$COVERAGE" ]; then
echo "Could not determine coverage percentage."
exit 1
elif (( $(echo "$COVERAGE < 80" | bc -l 2>/dev/null) )); then
echo "Coverage is below 80% (${COVERAGE}%)"
exit 1
else
echo "Coverage is at or above 80% (${COVERAGE}%)"
exit 0
fi

🛠️ Refactor suggestion

Add fallback for bc command.

The script uses bc for comparing floating point numbers, but this utility might not be available on all systems. Consider adding a fallback using Python.

# Check if coverage is at least 80%
COVERAGE=$($PYTHON -m coverage report | grep TOTAL | awk '{print $4}' | sed 's/%//')
if [ -z "$COVERAGE" ]; then
    echo "Could not determine coverage percentage."
    exit 1
-elif (( $(echo "$COVERAGE < 80" | bc -l 2>/dev/null) )); then
+elif ! command -v bc &> /dev/null || (( $(echo "$COVERAGE < 80" | bc -l 2>/dev/null) )); then
+    # If bc isn't available or coverage is less than 80%
+    # Try with Python as a fallback
+    if command -v bc &> /dev/null; then
+        BELOW_THRESHOLD=true
+    else
+        BELOW_THRESHOLD=$($PYTHON -c "print($COVERAGE < 80)")
+        if [ "$BELOW_THRESHOLD" = "True" ]; then
+            BELOW_THRESHOLD=true
+        else
+            BELOW_THRESHOLD=false
+        fi
+    fi
+    
+    if [ "$BELOW_THRESHOLD" = true ]; then
        echo "Coverage is below 80% (${COVERAGE}%)"
        exit 1
+    else
+        echo "Coverage is at or above 80% (${COVERAGE}%)"
+        exit 0
+    fi
else
    echo "Coverage is at or above 80% (${COVERAGE}%)"
    exit 0
fi
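A smaller alternative (assuming python3 is on PATH, which the test run already requires) sidesteps bc entirely by letting Python do the floating-point comparison and signal the result via its exit code:

```shell
COVERAGE="79.5"
# python3 exits 0 when coverage is below the threshold, 1 otherwise,
# so the shell if-branch needs no floating-point arithmetic of its own.
if python3 -c "import sys; sys.exit(0 if float('$COVERAGE') < 80 else 1)"; then
    echo "Coverage is below 80% (${COVERAGE}%)"
else
    echo "Coverage is at or above 80% (${COVERAGE}%)"
fi
```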
🤖 Prompt for AI Agents
In run_tests_with_coverage.sh around lines 31 to 41, the script uses the bc
command to compare floating point numbers, which may not be available on all
systems. Modify the script to check if bc is installed; if not, use a Python
one-liner as a fallback to perform the floating point comparison for coverage
percentage. This ensures compatibility across environments without relying
solely on bc.

Comment on lines +60 to +72
nonlocal call_count
agent = args[0]

if agent.__dict__.get('name') == 'ClaimDetector':
return MockDataFactory.create_runner_result_mock(claims)
elif agent.__dict__.get('name') == 'EvidenceHunter':
call_count += 1
if call_count == 2:
# Simulate timeout for the second claim
raise asyncio.TimeoutError("Evidence gathering timed out")
return MockDataFactory.create_runner_result_mock(evidence_map[claims[0].text])
elif agent.__dict__.get('name') == 'VerdictWriter':
return MockDataFactory.create_runner_result_mock(verdicts[0])

🛠️ Refactor suggestion

Robust agent-name detection in mocks

__dict__.get("name") assumes the agent stores name directly in its instance __dict__. If the agent switches to a @property the lookup silently returns None, and under __slots__ the instance has no __dict__ at all, so the access raises AttributeError.

Use getattr(agent, "name", None) instead:

-if agent.__dict__.get('name') == 'ClaimDetector':
+if getattr(agent, "name", None) == "ClaimDetector":

Apply to every occurrence throughout the test suite.
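The difference is easy to demonstrate; the agent classes below are toy stand-ins, not the project's real agents:

```python
class SlottedAgent:
    __slots__ = ("name",)  # instances have no __dict__ at all
    def __init__(self, name):
        self.name = name

class PropertyAgent:
    @property
    def name(self):        # lives on the class, never in instance __dict__
        return "ClaimDetector"

s, p = SlottedAgent("ClaimDetector"), PropertyAgent()

# __dict__-based lookups fail or silently miss:
print(getattr(s, "__dict__", None))  # None: __slots__ removed the dict
print(p.__dict__.get("name"))        # None: the property is class-level

# getattr handles both cases:
print(getattr(s, "name", None))      # ClaimDetector
print(getattr(p, "name", None))      # ClaimDetector
```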

🧰 Tools
🪛 Ruff (0.11.9)

62-62: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🤖 Prompt for AI Agents
In src/tests/test_error_recovery.py around lines 60 to 72, replace all
occurrences of agent.__dict__.get('name') with getattr(agent, 'name', None) to
safely access the name attribute without assuming it is stored in __dict__.
Apply this change consistently throughout the test suite to prevent errors if
the agent class uses @property or __slots__ for the name attribute.

Comment on lines +71 to +83
class VerifactManager:
def __init__(self, config: ManagerConfig = None):
self.config = config or ManagerConfig()
self.client = httpx.AsyncClient(
base_url=OPENROUTER_BASE_URL,
headers={
"Authorization": f"Bearer {OPENROUTER_API_KEY}",
"HTTP-Referer": "https://verifact.ai", # Replace with your site URL
"X-Title": "VeriFact", # Replace with your site name
},
timeout=self.config.timeout_seconds,
)

🛠️ Refactor suggestion

Provide a context-manager interface to guarantee HTTP-client cleanup

AsyncClient must be closed to free sockets; forgetting to call await close() will leak file descriptors in long-running apps.
Implementing __aenter__ / __aexit__ (or a wrapper function) makes correct usage the default:

 class VerifactManager:
+    async def __aenter__(self):
+        return self
+
+    async def __aexit__(self, exc_type, exc, tb):
+        await self.close()

This lets callers simply write:

async with VerifactManager() as manager:
    verdicts = await manager.run(query)
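A runnable sketch of the idea with a stand-in client (FakeClient and Manager are hypothetical names standing in for httpx.AsyncClient and VerifactManager, so the example needs no network):

```python
import asyncio

class FakeClient:
    """Stand-in for httpx.AsyncClient: records whether aclose() ran."""
    def __init__(self):
        self.closed = False
    async def aclose(self):
        self.closed = True

class Manager:
    """Hypothetical minimal manager; the real class is VerifactManager."""
    def __init__(self):
        self.client = FakeClient()
    async def close(self):
        await self.client.aclose()
    async def __aenter__(self):
        return self
    async def __aexit__(self, exc_type, exc, tb):
        await self.close()

async def main():
    async with Manager() as manager:
        assert manager.client.closed is False  # still open inside the block
    return manager.client.closed               # closed automatically on exit

print(asyncio.run(main()))  # True
```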
🧰 Tools
🪛 Ruff (0.11.9)

71-71: Missing docstring in public class

(D101)


72-72: Missing docstring in __init__

(D107)

🤖 Prompt for AI Agents
In src/verifact_manager_openrouter.py around lines 71 to 83, the VerifactManager
class creates an AsyncClient but does not ensure it is properly closed, risking
resource leaks. To fix this, implement async context manager methods __aenter__
and __aexit__ in VerifactManager that open and close the AsyncClient
respectively. This allows users to use "async with VerifactManager() as manager"
syntax, ensuring the client is automatically closed after use.

Comment on lines +27 to +33
# Get API key from environment
OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"

# Model configuration
DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "gpt-4o")
CLAIM_DETECTOR_MODEL = os.getenv("CLAIM_DETECTOR_MODEL", "gpt-4o-mini")

⚠️ Potential issue

Fail-fast when OPENROUTER_API_KEY is missing

OPENROUTER_API_KEY is silently read from the environment but never validated.
If the variable is absent the client will still be created and every request will fail with a 401, which is harder to debug than a single clear exception at start-up.

-OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
+OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
+if not OPENROUTER_API_KEY:
+    raise RuntimeError(
+        "Environment variable OPENROUTER_API_KEY is required but not set. "
+        "Create a .env file (see .env.template) or export it in your shell."
+    )
🤖 Prompt for AI Agents
In src/verifact_manager_openrouter.py around lines 27 to 33, the
OPENROUTER_API_KEY is read from the environment but not validated, causing
unclear 401 errors later. Add a check immediately after reading the key to
verify it is not None or empty, and if it is missing, raise a clear exception to
fail fast at startup with an informative error message.

Comment on lines +115 to +131
async def _call_openrouter(self, prompt: str, model: str = DEFAULT_MODEL) -> str:
"""Call OpenRouter API with the given prompt."""
response = await self.client.post(
"/chat/completions",
json={
"model": model,
"messages": [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": prompt}
],
"temperature": MODEL_TEMPERATURE,
"max_tokens": MODEL_MAX_TOKENS,
},
)
response.raise_for_status()
result = response.json()
return result["choices"][0]["message"]["content"]

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

Harden _call_openrouter against malformed responses

result["choices"][0]["message"]["content"] will raise KeyError/IndexError
if OpenRouter changes its schema or returns an error payload. Add validation
with informative logging:

-        result = response.json()
-        return result["choices"][0]["message"]["content"]
+        result = response.json()
+        try:
+            return result["choices"][0]["message"]["content"]
+        except (KeyError, IndexError, TypeError):
+            logger.error("Unexpected OpenRouter response: %s", result)
+            raise RuntimeError("OpenRouter response schema changed")
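The same guard, runnable against sample payloads (extract_content is our name for the illustration, not a function in the PR):

```python
def extract_content(result) -> str:
    """Pull the first message content out of a chat-completions payload,
    raising a clear error instead of a bare KeyError on schema drift."""
    try:
        return result["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError) as exc:
        raise RuntimeError(f"Unexpected response payload: {result!r}") from exc

ok = {"choices": [{"message": {"content": "Paris is the capital of France."}}]}
print(extract_content(ok))  # Paris is the capital of France.

# An error payload no longer surfaces as a cryptic KeyError:
try:
    extract_content({"error": {"message": "rate limited"}})
except RuntimeError as e:
    print(e)
```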
🤖 Prompt for AI Agents
In src/verifact_manager_openrouter.py around lines 115 to 131, the method
_call_openrouter directly accesses nested keys in the response JSON without
validation, which can raise KeyError or IndexError if the response schema
changes or contains errors. To fix this, add checks to verify that the keys
"choices", the first element in the list, "message", and "content" exist before
accessing them. If any key is missing or the list is empty, log an informative
error message including the full response content and raise an appropriate
exception or handle the error gracefully.

Comment on lines +50 to +104
def create_claim(
cls,
text: Optional[str] = None,
domain: Optional[str] = None,
context: Optional[float] = None,
language: Optional[str] = None,
controversial: bool = False,
) -> Claim:
"""Create a mock claim.

Args:
text: The claim text. If None, a random claim will be generated.
domain: The domain of the claim. If None, a random domain will be chosen.
context: The context score. If None, a random score will be generated.
language: The language of the claim. If None, English will be used.
controversial: Whether the claim should be controversial.

Returns:
A mock Claim object.
"""
if text is None:
domain = domain or random.choice(cls.DOMAINS)
language_code = language or "en"
language_name = cls.LANGUAGES.get(language_code, "English")

if domain == "politics":
if controversial:
text = f"The {language_name} government is corrupt and serves only the elite."
else:
text = f"The {language_name} parliament has 500 members."
elif domain == "health":
if controversial:
text = f"Alternative medicine is more effective than conventional medicine in {language_name}-speaking countries."
else:
text = f"Regular exercise reduces the risk of heart disease in {language_name}-speaking populations."
elif domain == "science":
if controversial:
text = f"Climate change is not caused by human activities according to {language_name} scientists."
else:
text = f"Water freezes at 0 degrees Celsius at standard pressure according to {language_name} textbooks."
elif domain == "economics":
if controversial:
text = f"Cryptocurrency will replace traditional banking in {language_name}-speaking countries."
else:
text = f"The GDP of {language_name}-speaking countries grew by 2.5% last year."
else:
if controversial:
text = f"Social media is destroying society in {language_name}-speaking regions."
else:
text = f"The internet was invented in the 1960s according to {language_name} historical records."

if context is None:
context = round(random.uniform(0.5, 1.0), 2)

return Claim(text=text, context=context)

🛠️ Refactor suggestion

High cyclomatic complexity in create_claim

create_claim handles six domains, multilingual logic and controversial
variants in a single 50-line branchy block (complexity 14 > 8). Splitting into
per-domain helper functions (e.g. _claim_politics, _claim_health) will make
the intent clearer and simplify future maintenance.

Example outline:

_domain_generators = {
    "politics": _claim_politics,
    "health": _claim_health,
    ...
}
text = _domain_generators[domain](language_name, controversial)
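A trimmed, runnable version of that outline (two domains plus a default shown; the remaining domains follow the same pattern, and the claim strings are taken from the original branches):

```python
def _claim_politics(language_name: str, controversial: bool) -> str:
    if controversial:
        return f"The {language_name} government is corrupt and serves only the elite."
    return f"The {language_name} parliament has 500 members."

def _claim_health(language_name: str, controversial: bool) -> str:
    if controversial:
        return (f"Alternative medicine is more effective than conventional "
                f"medicine in {language_name}-speaking countries.")
    return (f"Regular exercise reduces the risk of heart disease in "
            f"{language_name}-speaking populations.")

def _claim_default(language_name: str, controversial: bool) -> str:
    if controversial:
        return f"Social media is destroying society in {language_name}-speaking regions."
    return (f"The internet was invented in the 1960s according to "
            f"{language_name} historical records.")

_DOMAIN_GENERATORS = {
    "politics": _claim_politics,
    "health": _claim_health,
}

def claim_text(domain: str, language_name: str = "English",
               controversial: bool = False) -> str:
    """Dispatch to a per-domain helper, keeping each branch small."""
    generator = _DOMAIN_GENERATORS.get(domain, _claim_default)
    return generator(language_name, controversial)

print(claim_text("politics"))  # The English parliament has 500 members.
```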
🧰 Tools
🪛 Ruff (0.11.9)

52-52: Use X | Y for type annotations

Convert to X | Y

(UP007)


53-53: Use X | Y for type annotations

Convert to X | Y

(UP007)


54-54: Use X | Y for type annotations

Convert to X | Y

(UP007)


55-55: Use X | Y for type annotations

Convert to X | Y

(UP007)


59-59: Blank line contains whitespace

Remove whitespace from blank line

(W293)


66-66: Blank line contains whitespace

Remove whitespace from blank line

(W293)


74-74: Blank line contains whitespace

Remove whitespace from blank line

(W293)


100-100: Blank line contains whitespace

Remove whitespace from blank line

(W293)


103-103: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🪛 GitHub Check: Codacy Static Code Analysis

[warning] 50-50: src/tests/utils/mock_data_factory.py#L50
Method create_claim has a cyclomatic complexity of 14 (limit is 8)


[warning] 71-71: src/tests/utils/mock_data_factory.py#L71
Standard pseudo-random generators are not suitable for security/cryptographic purposes.


[warning] 102-102: src/tests/utils/mock_data_factory.py#L102
Standard pseudo-random generators are not suitable for security/cryptographic purposes.

🤖 Prompt for AI Agents
In src/tests/utils/mock_data_factory.py between lines 50 and 104, the
create_claim method has high cyclomatic complexity due to handling multiple
domains and controversial variants in a single large conditional block. Refactor
by extracting the claim text generation for each domain into separate helper
functions named like _claim_politics, _claim_health, etc. Then create a
dictionary mapping domain names to these helper functions and call the
appropriate one based on the domain to generate the text. This will simplify the
logic, reduce complexity, and improve maintainability.

@dean2727
Collaborator

Hi @stevenrayhinojosa-gmail-com ! I took a look through your code and appreciate your effort on bringing in OpenRouter. Can we remove the duplicate verifact manager file (verifact_manager_openrouter.py), and instead set up our agents/runner as they are originally coded so that OpenRouter is in use? Feel free to reference the OpenAI tutorial notebook in the main branch (one of the last sections, on OpenRouter).
