diff --git a/.github/workflows/claude-code-review.yml b/.github/workflows/claude-code-review.yml deleted file mode 100644 index 205b0fe..0000000 --- a/.github/workflows/claude-code-review.yml +++ /dev/null @@ -1,57 +0,0 @@ -name: Claude Code Review - -on: - pull_request: - types: [opened, synchronize] - # Optional: Only run on specific file changes - # paths: - # - "src/**/*.ts" - # - "src/**/*.tsx" - # - "src/**/*.js" - # - "src/**/*.jsx" - -jobs: - claude-review: - # Optional: Filter by PR author - # if: | - # github.event.pull_request.user.login == 'external-contributor' || - # github.event.pull_request.user.login == 'new-developer' || - # github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR' - - runs-on: ubuntu-latest - permissions: - contents: read - pull-requests: read - issues: read - id-token: write - - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-depth: 1 - - - name: Run Claude Code Review - id: claude-review - uses: anthropics/claude-code-action@v1 - with: - claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }} - prompt: | - REPO: ${{ github.repository }} - PR NUMBER: ${{ github.event.pull_request.number }} - - Please review this pull request and provide feedback on: - - Code quality and best practices - - Potential bugs or issues - - Performance considerations - - Security concerns - - Test coverage - - Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback. - - Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR. 
- - # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md - # or https://docs.claude.com/en/docs/claude-code/cli-reference for available options - claude_args: '--allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"' - diff --git a/.gitignore b/.gitignore index 2358604..6c37896 100644 --- a/.gitignore +++ b/.gitignore @@ -24,3 +24,4 @@ dmypy.json # Python version .python-version +docs/superpowers diff --git a/CHANGELOG.md b/CHANGELOG.md index c1ab03a..3bb25b3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,6 +7,21 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 --- +## [2.0.3] - Unreleased + +### Added +- **Per-endpoint rate limiting**: Scryfall enforces tiered rate limits. `Search`, `Named`, `Random`, and `Collection` endpoints now automatically enforce 2 requests/second; all other endpoints enforce 10 requests/second. See [Scryfall rate limit docs](https://scryfall.com/docs/api/rate-limits). +- **`SlowRateLimiter` subclass**: New rate limiter class for endpoints with stricter limits. Endpoint classes declare their tier via `_rate_limiter_class` class variable. +- **Per-class limiter registry**: Each `RateLimiter` subclass maintains its own global instance, allowing independent rate limits per endpoint category. + +### Changed +- `rate_limit_per_second` kwarg now creates a per-instance limiter scoped to the handler's lifecycle. Separate instantiations do not coordinate with each other; pagination within a single handler (`iter_all()`) is properly throttled. +- `reset_global_limiter()` renamed to `reset_all_limiters()`. The old name still works but emits a `DeprecationWarning`. +- `get_global_limiter()` no longer accepts a `calls_per_second` argument. Passing one emits a `DeprecationWarning` and the value is ignored. Rate is determined by the class default. 
+- Passing `rate_limit_per_second` to `iter_all()` now emits a `UserWarning` and is ignored. The kwarg must be set at construction time. + +--- + ## [2.0.0] - 2025-01-11 (Rewrite Branch) Major refactoring and modernization of the Scrython library with significant improvements to code quality, type safety, and usability. diff --git a/CLAUDE.md b/CLAUDE.md index 0c46866..7c1f96f 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -51,6 +51,8 @@ The library uses a **base request handler** (`ScrythonRequestHandler`) combined scrython/ ├── base.py # ScrythonRequestHandler, ScryfallError ├── base_mixins.py # ScryfallListMixin, ScryfallCatalogMixin +├── rate_limiter.py # RateLimiter, SlowRateLimiter (per-endpoint tiering) +├── cache.py # Request caching with TTL ├── utils.py # Utility functions (e.g., to_object_array) ├── cards/ │ ├── cards.py # Card endpoint classes + Cards factory @@ -91,11 +93,11 @@ From Contributing.md: ## Important Notes -- **No rate limiting**: Library doesn't enforce Scryfall's rate limits - users must handle this themselves (e.g., `time.sleep(0.1)`) +- **Built-in rate limiting**: Automatic per-endpoint rate limiting (10/s default, 2/s for search/named/random/collection). See `scrython/rate_limiter.py`. Users can override with `rate_limit_per_second` kwarg or disable with `rate_limit=False`. 
- **No backwards compatibility**: Breaking changes expected as Scryfall API evolves -- **Walrus operator usage**: Code uses `:=` (requires Python 3.8+) -- **Dependencies**: python >= 3.5.3, asyncio >= 3.4.3, aiohttp >= 3.4.4 (though current code uses urllib, not aiohttp) -- **Branches**: `master` is stable/PyPI, `develop` is staging (per Contributing.md, though current branch is `rewrite`) +- **Python 3.10+ required**: Uses `X | Y` union syntax and `type[X]` annotations throughout +- **Dependencies**: urllib (standard library), no external HTTP dependencies +- **Branches**: `main` is stable/PyPI, `develop` is staging ## Adding New Endpoints diff --git a/README.md b/README.md index 129688f..372304b 100644 --- a/README.md +++ b/README.md @@ -14,9 +14,13 @@ pip install scrython ## ⚠️ Important: Rate Limiting -**Good news!** Scrython 2.0 now includes **built-in rate limiting** enabled by default, enforcing Scryfall's 10 requests/second guideline automatically. You no longer need to manually add delays between requests. +**Good news!** Scrython 2.0 includes **built-in rate limiting** enabled by default. You no longer need to manually add delays between requests. -**Scryfall requires 50-100 milliseconds delay between requests** (~10 requests/second maximum). +Scrython automatically enforces Scryfall's tiered rate limits: +- **10 requests/second** for most endpoints (cards by ID, sets, bulk data, autocomplete, etc.) +- **2 requests/second** for heavier endpoints: `Search`, `Named`, `Random`, and `Collection` + +See [Scryfall's rate limit documentation](https://scryfall.com/docs/api/rate-limits) for details. 
### Automatic Rate Limiting (Default): @@ -27,17 +31,19 @@ import scrython cards_to_fetch = ['Lightning Bolt', 'Counterspell', 'Black Lotus'] for card_name in cards_to_fetch: - card = scrython.cards.Named(fuzzy=card_name) # Automatically rate limited + card = scrython.cards.Named(fuzzy=card_name) # Automatically rate limited (2/s) print(f"{card.name} - {card.set}") ``` ### Custom Rate Limits: ```python -# Use a slower rate (5 requests/second) +# Override the rate for a specific call (5 requests/second) +# Note: the override is scoped to this handler instance only. +# Separate instantiations each get their own limiter. card = scrython.cards.Named(fuzzy='Lightning Bolt', rate_limit_per_second=5) -# Disable rate limiting (use with caution!) +# Disable rate limiting entirely (use with caution!) card = scrython.cards.Named(fuzzy='Lightning Bolt', rate_limit=False) ``` @@ -53,7 +59,7 @@ import time for card_name in cards_to_fetch: card = scrython.cards.Named(fuzzy=card_name, rate_limit=False) print(f"{card.name} - {card.set}") - time.sleep(0.1) # 100ms delay + time.sleep(0.5) # 500ms delay for Named endpoint ``` ### Better: Use Bulk Data for Large Datasets diff --git a/gen_docs.py b/gen_docs.py index 3816f70..067a18e 100644 --- a/gen_docs.py +++ b/gen_docs.py @@ -68,7 +68,6 @@ def format_functions(_class, function_list, f): def main(subpackage): for _class in subpackage.__all__: - intro = f""" These docs will likely not be as detailed as the official Scryfall Documentation, and you should reference that for more information. 
diff --git a/scripts/update_version_for_testpypi.py b/scripts/update_version_for_testpypi.py index 5294a08..c898c6d 100644 --- a/scripts/update_version_for_testpypi.py +++ b/scripts/update_version_for_testpypi.py @@ -20,6 +20,7 @@ def get_unique_identifier() -> str: # Fallback for local testing: use timestamp from datetime import datetime, timezone + return datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S") @@ -51,7 +52,7 @@ def update_version_in_pyproject() -> str: f'version = "{new_version}"', content, count=1, - flags=re.MULTILINE + flags=re.MULTILINE, ) # Write back (temporary, only in workflow workspace) diff --git a/scrython/base.py b/scrython/base.py index d0d083a..130fae2 100644 --- a/scrython/base.py +++ b/scrython/base.py @@ -16,8 +16,8 @@ def __init__(self, scryfall_data: dict[str, Any], *args: Any, **kwargs: Any) -> self._status: int = scryfall_data["status"] self._code: str = scryfall_data["code"] self._details: str = scryfall_data["details"] - self._type: str | None = scryfall_data["type"] - self._warnings: list[str] | None = scryfall_data["warnings"] + self._type: str | None = scryfall_data.get("type") + self._warnings: list[str] | None = scryfall_data.get("warnings") @property def status(self) -> int: @@ -58,6 +58,8 @@ class ScrythonRequestHandler: _accept: str = "application/json" _content_type: str = "application/json" _endpoint: str = "" + _rate_limiter_class: type[RateLimiter] = RateLimiter + _override_limiter: RateLimiter | None = None @classmethod def set_user_agent(cls, user_agent: str) -> None: @@ -117,6 +119,25 @@ def endpoint(self) -> str: return self._endpoint def __init__(self, **kwargs: Any) -> None: + """ + Initialize a Scryfall API request handler. + + Args: + **kwargs: Endpoint-specific parameters, plus optional: + - rate_limit (bool): Enable rate limiting (default: True) + - rate_limit_per_second (float): Override the default rate limit + for this handler instance. 
Creates a per-instance limiter, + so separate instantiations do not coordinate with each other + or with the class default limiter. Pagination within a single + handler (e.g., iter_all()) is properly throttled. + - cache (bool): Enable caching (default: False) + - cache_ttl (int): Cache TTL in seconds (default: 3600) + """ + rate_limit_per_second = kwargs.get("rate_limit_per_second") + self._override_limiter: RateLimiter | None = None + if rate_limit_per_second is not None: + self._override_limiter = RateLimiter(rate_limit_per_second) + self._build_path(**kwargs) self._build_params(**kwargs) self._fetch(**kwargs) @@ -139,7 +160,6 @@ def _fetch_raw(self, url: str, cache_key: str | None = None, **kwargs: Any) -> d - cache (bool): Enable caching (default: False) - cache_ttl (int): Cache TTL in seconds (default: 3600) - rate_limit (bool): Enable rate limiting (default: True) - - rate_limit_per_second (float): Rate limit (default: 10.0) - data (dict): POST data (optional) Returns: @@ -165,13 +185,13 @@ def _fetch_raw(self, url: str, cache_key: str | None = None, **kwargs: Any) -> d rate_limit = kwargs.get("rate_limit", True) if rate_limit: - # Get rate limit setting - rate_limit_per_second = kwargs.get("rate_limit_per_second", 10.0) + # Use the instance override limiter if set, otherwise fall back + # to the class-level global limiter for the endpoint's tier. + if self._override_limiter is not None: + limiter = self._override_limiter + else: + limiter = self._rate_limiter_class.get_global_limiter() - # Get or create global rate limiter - limiter = RateLimiter.get_global_limiter(rate_limit_per_second) - - # Wait if necessary to respect rate limit limiter.wait() # Prepare POST data if provided @@ -200,6 +220,20 @@ def _fetch_raw(self, url: str, cache_key: str | None = None, **kwargs: Any) -> d return response_data except urllib.error.HTTPError as exc: + # Scryfall returns JSON error bodies on 4xx/5xx responses. 
+ # urllib raises HTTPError before we can read the body normally, + # but the HTTPError itself is a file-like object containing it. + try: + charset = exc.headers.get_param("charset") + if not isinstance(charset, str): + charset = "utf-8" + error_data = json.loads(exc.read().decode(charset)) + except (json.JSONDecodeError, UnicodeDecodeError): + raise Exception(f"{exc}: {request.get_full_url()}") from exc + + if error_data.get("object") == "error": + raise ScryfallError(error_data, error_data["details"]) from exc + raise Exception(f"{exc}: {request.get_full_url()}") from exc def _fetch(self, **kwargs: Any) -> None: diff --git a/scrython/base_mixins.py b/scrython/base_mixins.py index 19f6c6a..6e97a38 100644 --- a/scrython/base_mixins.py +++ b/scrython/base_mixins.py @@ -1,3 +1,4 @@ +import warnings from functools import cache from typing import Any @@ -105,7 +106,10 @@ def iter_all(self, **kwargs): - cache (bool): Enable caching for pagination (default: False) - cache_ttl (int): Cache TTL in seconds (default: 3600) - rate_limit (bool): Enable rate limiting (default: True) - - rate_limit_per_second (float): Rate limit (default: 10.0) + + Note: + rate_limit_per_second must be set at construction time, not here. + Pagination uses the handler's existing rate limiter. Yields: Individual items from all pages @@ -121,6 +125,15 @@ def iter_all(self, **kwargs): """ import hashlib + if "rate_limit_per_second" in kwargs: + warnings.warn( + "rate_limit_per_second must be set at construction time, " + "not passed to iter_all(). 
This kwarg is ignored.", + UserWarning, + stacklevel=2, + ) + kwargs.pop("rate_limit_per_second") + # Yield items from current page yield from self.data diff --git a/scrython/cards/cards.py b/scrython/cards/cards.py index 743a4b4..242bbb0 100644 --- a/scrython/cards/cards.py +++ b/scrython/cards/cards.py @@ -1,5 +1,6 @@ from ..base import ScrythonRequestHandler from ..base_mixins import ScryfallCatalogMixin, ScryfallListMixin +from ..rate_limiter import SlowRateLimiter from ..types import ScryfallCardData from .cards_mixins import CardsObjectMixin @@ -158,6 +159,7 @@ class Search(ScryfallListMixin, ScrythonRequestHandler): """ _endpoint = "/cards/search" + _rate_limiter_class = SlowRateLimiter list_data_type = Object @@ -189,6 +191,7 @@ class Named(CardsObjectMixin, ScrythonRequestHandler): """ _endpoint = "/cards/named" + _rate_limiter_class = SlowRateLimiter class Autocomplete(ScryfallCatalogMixin, ScrythonRequestHandler): @@ -244,6 +247,7 @@ class Random(CardsObjectMixin, ScrythonRequestHandler): """ _endpoint = "/cards/random" + _rate_limiter_class = SlowRateLimiter class Collection(ScryfallListMixin, ScrythonRequestHandler): @@ -276,6 +280,7 @@ class Collection(ScryfallListMixin, ScrythonRequestHandler): """ _endpoint = "/cards/collection" + _rate_limiter_class = SlowRateLimiter list_data_type = Object diff --git a/scrython/rate_limiter.py b/scrython/rate_limiter.py index 8e395ac..18f9368 100644 --- a/scrython/rate_limiter.py +++ b/scrython/rate_limiter.py @@ -1,12 +1,17 @@ """Rate limiting for Scryfall API requests. -Scryfall requests a rate limit of 10 requests per second. This module -provides a thread-safe rate limiter that enforces this limit by default, -while allowing users to opt-out or customize the rate limit. +Scryfall requests a rate limit of 10 requests per second for most endpoints, +but enforces a stricter limit on certain card endpoints. 
+ +See: https://scryfall.com/docs/api/rate-limits + +This module provides a thread-safe rate limiter with a per-class registry, +allowing different endpoint categories to maintain independent rate limits. """ import threading import time +import warnings from typing import ClassVar @@ -20,8 +25,8 @@ class RateLimiter: The limiter is thread-safe and can be shared across multiple threads. """ - # Class-level (global) rate limiter shared across all instances - _global_limiter: ClassVar["RateLimiter | None"] = None + # Per-class registry of global rate limiter instances + _global_limiters: ClassVar[dict[type, "RateLimiter"]] = {} _global_lock: ClassVar[threading.Lock] = threading.Lock() def __init__(self, calls_per_second: float = 10.0) -> None: @@ -61,32 +66,64 @@ def wait(self) -> None: self.last_call = time.time() @classmethod - def get_global_limiter(cls, calls_per_second: float = 10.0) -> "RateLimiter": + def get_global_limiter(cls, calls_per_second: float | None = None) -> "RateLimiter": """ - Get or create the global rate limiter instance. + Get or create the global rate limiter for this class. - This ensures that all API requests share a single rate limiter, - preventing rate limit violations even when multiple objects are - created concurrently. + Each RateLimiter class or subclass maintains its own independent + global instance, keyed by class in a shared registry. This allows + different endpoint categories to enforce different rate limits. Args: - calls_per_second: Rate limit to use if creating a new limiter + calls_per_second: Deprecated, ignored. Rate is determined by + the class default. Passing a value emits a DeprecationWarning. Returns: - The global RateLimiter instance + The global RateLimiter instance for this class """ + if calls_per_second is not None: + warnings.warn( + "calls_per_second argument to get_global_limiter() is deprecated " + "and ignored. 
Rate is determined by the class default.", + DeprecationWarning, + stacklevel=2, + ) with cls._global_lock: - if cls._global_limiter is None: - cls._global_limiter = cls(calls_per_second) - return cls._global_limiter + if cls not in cls._global_limiters: + cls._global_limiters[cls] = cls() + return cls._global_limiters[cls] @classmethod - def reset_global_limiter(cls) -> None: + def reset_all_limiters(cls) -> None: """ - Reset the global rate limiter. + Reset all global rate limiters. - This is primarily useful for testing to ensure a clean state - between test runs. + Clears the entire registry, causing new instances to be created + on the next call to get_global_limiter(). This is primarily + useful for testing to ensure a clean state between test runs. """ with cls._global_lock: - cls._global_limiter = None + cls._global_limiters.clear() + + @classmethod + def reset_global_limiter(cls) -> None: + """Deprecated: use reset_all_limiters() instead.""" + warnings.warn( + "reset_global_limiter() is deprecated, use reset_all_limiters() instead", + DeprecationWarning, + stacklevel=2, + ) + cls.reset_all_limiters() + + +class SlowRateLimiter(RateLimiter): + """ + Rate limiter for Scryfall endpoints with stricter rate limits. + + Scryfall enforces 2 requests per second on certain card endpoints + (search, named, random, collection). This subclass + provides that slower default rate. + """ + + def __init__(self, calls_per_second: float = 2.0) -> None: + super().__init__(calls_per_second) diff --git a/tests/conftest.py b/tests/conftest.py index 465723f..181a5a4 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -1,6 +1,9 @@ """Pytest configuration and shared fixtures for Scrython tests.""" +import http.client +import io import json +import urllib.error from pathlib import Path from unittest.mock import Mock, patch @@ -20,10 +23,10 @@ def reset_globals(): Resets rate limiter and cache to ensure tests don't interfere with each other. 
""" - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() reset_global_cache() yield - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() reset_global_cache() @@ -138,11 +141,10 @@ def set_error_response(self, error_data): def __call__(self, request): # Record the call for assertion purposes + url = request.get_full_url() if hasattr(request, "get_full_url") else str(request) self.calls.append( { - "url": ( - request.get_full_url() if hasattr(request, "get_full_url") else str(request) - ), + "url": url, "method": request.get_method() if hasattr(request, "get_method") else "GET", "headers": dict(request.headers) if hasattr(request, "headers") else {}, } @@ -151,6 +153,23 @@ def __call__(self, request): if self.response_data is None: raise ValueError("No response data set. Call set_response() first.") + # Real urlopen raises HTTPError for 4xx/5xx status codes + if self.status >= 400: + body = ( + self.response_data.encode("utf-8") + if isinstance(self.response_data, str) + else self.response_data + ) + headers = http.client.HTTPMessage() + headers["Content-Type"] = "application/json; charset=utf-8" + raise urllib.error.HTTPError( + url=url, + code=self.status, + msg=f"HTTP Error {self.status}", + hdrs=headers, + fp=io.BytesIO(body), + ) + return MockURLResponse(self.response_data, self.status) mock = MockURLOpen() diff --git a/tests/test_caching.py b/tests/test_caching.py index a06c056..768eb0d 100644 --- a/tests/test_caching.py +++ b/tests/test_caching.py @@ -3,7 +3,7 @@ import contextlib import time -from scrython.base import ScrythonRequestHandler +from scrython.base import ScryfallError, ScrythonRequestHandler from scrython.cache import MemoryCache, generate_cache_key, get_global_cache, reset_global_cache @@ -271,7 +271,7 @@ def test_cache_doesnt_store_errors(self, mock_urlopen): class TestHandler(ScrythonRequestHandler): _endpoint = "cards/named" - with contextlib.suppress(Exception): + with contextlib.suppress(ScryfallError): 
_handler = TestHandler(fuzzy="Nonexistent", cache=True) # Cache should be empty (errors not cached) diff --git a/tests/test_rate_limiting.py b/tests/test_rate_limiting.py index 29c2605..5a3c711 100644 --- a/tests/test_rate_limiting.py +++ b/tests/test_rate_limiting.py @@ -5,7 +5,7 @@ import pytest -from scrython.base import ScrythonRequestHandler +from scrython.base import ScryfallError, ScrythonRequestHandler from scrython.rate_limiter import RateLimiter @@ -35,8 +35,8 @@ def test_rate_limiter_enforces_delay(self): limiter.wait() first_call_time = time.time() - start - # Should be very fast (< 0.01s) - assert first_call_time < 0.01 + # Should be very fast + assert first_call_time < 0.05 # Second call immediately after should wait start = time.time() @@ -62,7 +62,7 @@ def test_rate_limiter_no_delay_after_interval(self): elapsed = time.time() - start # Should be very fast - assert elapsed < 0.01 + assert elapsed < 0.05 def test_rate_limiter_multiple_calls(self): """Test RateLimiter with multiple sequential calls.""" @@ -75,12 +75,12 @@ def test_rate_limiter_multiple_calls(self): # Should take ~0.2s (4 intervals * 0.05s) # First call is immediate, then 4 waits of 0.05s each - assert 0.15 < elapsed < 0.3 + assert 0.15 < elapsed < 0.5 def test_get_global_limiter_creates_singleton(self): """Test that get_global_limiter returns a singleton.""" # Reset first - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() limiter1 = RateLimiter.get_global_limiter() limiter2 = RateLimiter.get_global_limiter() @@ -88,19 +88,55 @@ def test_get_global_limiter_creates_singleton(self): # Should be the same instance assert limiter1 is limiter2 - def test_reset_global_limiter(self): - """Test that reset_global_limiter clears the singleton.""" + def test_reset_all_limiters(self): + """Test that reset_all_limiters clears the singleton.""" # Create a global limiter limiter1 = RateLimiter.get_global_limiter() # Reset - RateLimiter.reset_global_limiter() + 
RateLimiter.reset_all_limiters() # Get another - should be a new instance limiter2 = RateLimiter.get_global_limiter() assert limiter1 is not limiter2 + def test_global_limiter_registry_per_class(self): + """Test that different RateLimiter subclasses get independent global instances.""" + from scrython.rate_limiter import SlowRateLimiter + + limiter_fast = RateLimiter.get_global_limiter() + limiter_slow = SlowRateLimiter.get_global_limiter() + + assert limiter_fast is not limiter_slow + assert limiter_fast.calls_per_second == 10.0 + assert limiter_slow.calls_per_second == 2.0 + + def test_reset_all_limiters_clears_all(self): + """Test that reset clears the entire registry.""" + from scrython.rate_limiter import SlowRateLimiter + + old_fast = RateLimiter.get_global_limiter() + old_slow = SlowRateLimiter.get_global_limiter() + + RateLimiter.reset_all_limiters() + + new_fast = RateLimiter.get_global_limiter() + new_slow = SlowRateLimiter.get_global_limiter() + + assert new_fast is not old_fast + assert new_slow is not old_slow + + def test_get_global_limiter_deprecated_param(self): + """Test that passing calls_per_second to get_global_limiter emits a deprecation warning.""" + RateLimiter.reset_all_limiters() + + with pytest.warns(DeprecationWarning, match="calls_per_second.*deprecated"): + limiter = RateLimiter.get_global_limiter(5.0) + + # Should still return the default limiter (argument is ignored) + assert limiter.calls_per_second == 10.0 + class TestRequestHandlerRateLimiting: """Test rate limiting integration with ScrythonRequestHandler.""" @@ -108,7 +144,10 @@ class TestRequestHandlerRateLimiting: @pytest.fixture def mock_urlopen_with_rate_limit(self): """Mock urlopen without disabling rate limiting.""" + import http.client + import io import json + import urllib.error from unittest.mock import Mock, patch class MockURLResponse: @@ -157,13 +196,10 @@ def set_error_response(self, error_data): self.status = error_data.get("status", 404) def __call__(self, 
request): + url = request.get_full_url() if hasattr(request, "get_full_url") else str(request) self.calls.append( { - "url": ( - request.get_full_url() - if hasattr(request, "get_full_url") - else str(request) - ), + "url": url, "method": request.get_method() if hasattr(request, "get_method") else "GET", "headers": dict(request.headers) if hasattr(request, "headers") else {}, } @@ -172,6 +208,23 @@ def __call__(self, request): if self.response_data is None: raise ValueError("No response data set. Call set_response() first.") + # Real urlopen raises HTTPError for 4xx/5xx status codes + if self.status >= 400: + body = ( + self.response_data.encode("utf-8") + if isinstance(self.response_data, str) + else self.response_data + ) + headers = http.client.HTTPMessage() + headers["Content-Type"] = "application/json; charset=utf-8" + raise urllib.error.HTTPError( + url=url, + code=self.status, + msg=f"HTTP Error {self.status}", + hdrs=headers, + fp=io.BytesIO(body), + ) + return MockURLResponse(self.response_data, self.status) mock = MockURLOpen() @@ -182,7 +235,7 @@ def __call__(self, request): def test_rate_limit_enabled_by_default(self, mock_urlopen_with_rate_limit, sample_card): """Test that rate limiting is enabled by default.""" # Reset rate limiter - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() mock_urlopen_with_rate_limit.set_response(data=sample_card) @@ -201,7 +254,7 @@ class TestHandler(ScrythonRequestHandler): def test_rate_limit_can_be_disabled(self, mock_urlopen_with_rate_limit, sample_card): """Test that rate limiting can be disabled.""" # Reset rate limiter - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() mock_urlopen_with_rate_limit.set_response(data=sample_card) @@ -215,31 +268,115 @@ class TestHandler(ScrythonRequestHandler): elapsed = time.time() - start # Should be very fast (no rate limiting) - assert elapsed < 0.05 + assert elapsed < 0.3 + + def test_default_rate_limiter_class_is_base(self): + """Test 
that ScrythonRequestHandler defaults to RateLimiter.""" + assert ScrythonRequestHandler._rate_limiter_class is RateLimiter + + def test_slow_endpoint_uses_slow_limiter(self, mock_urlopen_with_rate_limit, sample_card): + """Test that an endpoint with SlowRateLimiter uses the slow rate.""" + from scrython.rate_limiter import SlowRateLimiter + + mock_urlopen_with_rate_limit.set_response(data=sample_card) + + class SlowHandler(ScrythonRequestHandler): + _endpoint = "cards/search" + _rate_limiter_class = SlowRateLimiter + + start = time.time() + _handler1 = SlowHandler(q="Card 1") + _handler2 = SlowHandler(q="Card 2") + elapsed = time.time() - start + + # SlowRateLimiter at 2/s means ~0.5s between calls + assert elapsed > 0.45 + + def test_rate_limit_per_second_kwarg_overrides_class( + self, mock_urlopen_with_rate_limit, sample_card + ): + """Test that rate_limit_per_second kwarg overrides the class default.""" + from scrython.rate_limiter import SlowRateLimiter + + mock_urlopen_with_rate_limit.set_response(data=sample_card) + + class SlowHandler(ScrythonRequestHandler): + _endpoint = "cards/search" + _rate_limiter_class = SlowRateLimiter + + # Override to a fast rate — should NOT wait 500ms + start = time.time() + _handler1 = SlowHandler(q="Card 1", rate_limit_per_second=20.0) + _handler2 = SlowHandler(q="Card 2", rate_limit_per_second=20.0) + elapsed = time.time() - start + + # Should be fast (~0.05s), not slow (~0.5s) + assert elapsed < 0.5 def test_custom_rate_limit(self, mock_urlopen_with_rate_limit, sample_card): - """Test that custom rate limits can be specified.""" + """Test that rate_limit_per_second creates a per-instance limiter.""" # Reset rate limiter - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() mock_urlopen_with_rate_limit.set_response(data=sample_card) class TestHandler(ScrythonRequestHandler): _endpoint = "cards/named" - # Use a slower rate limit (5 calls/sec = 0.2s interval) + # Each handler gets its own limiter, so separate 
instantiations + # do not throttle against each other (only pagination within + # a single handler instance is throttled). start = time.time() _handler1 = TestHandler(fuzzy="Card 1", rate_limit_per_second=5.0) _handler2 = TestHandler(fuzzy="Card 2", rate_limit_per_second=5.0) elapsed = time.time() - start - # Should wait ~0.2s - assert elapsed > 0.18 + assert elapsed < 0.2 + + def test_custom_rate_limit_throttles_within_instance( + self, mock_urlopen_with_rate_limit, sample_card + ): + """Test that rate_limit_per_second throttles repeated calls on the same handler.""" + RateLimiter.reset_all_limiters() + + mock_urlopen_with_rate_limit.set_response(data=sample_card) + + class TestHandler(ScrythonRequestHandler): + _endpoint = "cards/named" + + handler = TestHandler(fuzzy="Card 1", rate_limit_per_second=5.0) + + # Call _fetch_raw again on the same instance (simulates pagination) + start = time.time() + handler._fetch_raw( + "https://api.scryfall.com/cards/named?fuzzy=Card+2", + rate_limit=True, + ) + elapsed = time.time() - start + + # Should wait ~0.2s (5 calls/sec = 200ms interval) + assert elapsed > 0.15 + + def test_slow_rate_limiter_attributes_via_global( + self, mock_urlopen_with_rate_limit, sample_card + ): + """Test that SlowRateLimiter global instance has correct default attributes.""" + from scrython.rate_limiter import SlowRateLimiter + + # Reset rate limiter + RateLimiter.reset_all_limiters() + + mock_urlopen_with_rate_limit.set_response(data=sample_card) + + limiter = SlowRateLimiter.get_global_limiter() + + assert limiter.calls_per_second == 2.0 + assert limiter.min_interval == 0.5 def test_rate_limit_respects_previous_calls(self, mock_urlopen_with_rate_limit, sample_card): """Test that rate limiting considers timing of previous calls.""" # Reset rate limiter - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() mock_urlopen_with_rate_limit.set_response(data=sample_card) @@ -258,14 +395,14 @@ class TestHandler(ScrythonRequestHandler): 
elapsed = time.time() - start # Should wait ~0.05s (remaining time) - assert 0.03 < elapsed < 0.08 + assert 0.03 < elapsed < 0.15 def test_rate_limit_multiple_handlers_share_limiter( self, mock_urlopen_with_rate_limit, sample_card ): """Test that multiple handlers share the same global rate limiter.""" # Reset rate limiter - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() mock_urlopen_with_rate_limit.set_response(data=sample_card) @@ -287,7 +424,7 @@ class HandlerB(ScrythonRequestHandler): def test_rate_limit_with_errors_still_enforced(self, mock_urlopen_with_rate_limit): """Test that rate limiting is enforced even when API returns errors.""" # Reset rate limiter - RateLimiter.reset_global_limiter() + RateLimiter.reset_all_limiters() mock_urlopen_with_rate_limit.set_error_response( {"status": 404, "code": "not_found", "details": "Not found"} @@ -298,13 +435,80 @@ class TestHandler(ScrythonRequestHandler): # Make two calls that will error start = time.time() - with contextlib.suppress(Exception): + with contextlib.suppress(ScryfallError): _handler1 = TestHandler(fuzzy="Nonexistent 1") - with contextlib.suppress(Exception): + with contextlib.suppress(ScryfallError): _handler2 = TestHandler(fuzzy="Nonexistent 2") elapsed = time.time() - start # Should still be rate limited assert elapsed > 0.08 + + +class TestSlowRateLimiter: + """Test the SlowRateLimiter class.""" + + def test_slow_rate_limiter_default_rate(self): + """Test that SlowRateLimiter defaults to 2 calls per second.""" + from scrython.rate_limiter import SlowRateLimiter + + limiter = SlowRateLimiter() + + assert limiter.calls_per_second == 2.0 + assert limiter.min_interval == 0.5 + + def test_slow_rate_limiter_enforces_delay(self): + """Test that SlowRateLimiter enforces 500ms delays between calls.""" + from scrython.rate_limiter import SlowRateLimiter + + limiter = SlowRateLimiter() + + limiter.wait() + + start = time.time() + limiter.wait() + elapsed = time.time() - start + + assert 
0.45 < elapsed < 1.0 + + +class TestEndpointRateLimiterAssignment: + """Test that endpoint classes declare the correct rate limiter class.""" + + def test_search_uses_slow_limiter(self): + from scrython.cards.cards import Search + from scrython.rate_limiter import SlowRateLimiter + + assert Search._rate_limiter_class is SlowRateLimiter + + def test_named_uses_slow_limiter(self): + from scrython.cards.cards import Named + from scrython.rate_limiter import SlowRateLimiter + + assert Named._rate_limiter_class is SlowRateLimiter + + def test_random_uses_slow_limiter(self): + from scrython.cards.cards import Random + from scrython.rate_limiter import SlowRateLimiter + + assert Random._rate_limiter_class is SlowRateLimiter + + def test_collection_uses_slow_limiter(self): + from scrython.cards.cards import Collection + from scrython.rate_limiter import SlowRateLimiter + + assert Collection._rate_limiter_class is SlowRateLimiter + + def test_autocomplete_uses_default_limiter(self): + from scrython.cards.cards import Autocomplete + from scrython.rate_limiter import RateLimiter + + assert Autocomplete._rate_limiter_class is RateLimiter + + def test_by_code_number_uses_default_limiter(self): + from scrython.cards.cards import ByCodeNumber + from scrython.rate_limiter import RateLimiter + + assert ByCodeNumber._rate_limiter_class is RateLimiter
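
---

Reviewer note: the core of this patch is the per-class limiter registry in `scrython/rate_limiter.py` — each `RateLimiter` subclass gets its own global singleton, so slow endpoints throttle independently of fast ones. Below is a minimal standalone sketch of that mechanism (names mirror the patch, but this is an illustrative copy, not the shipped module):

```python
import threading
import time


class RateLimiter:
    """Sketch of the per-class global limiter registry from this patch."""

    # Shared registry, keyed by class: each subclass gets its own singleton.
    _global_limiters: dict[type, "RateLimiter"] = {}
    _global_lock = threading.Lock()

    def __init__(self, calls_per_second: float = 10.0) -> None:
        self.calls_per_second = calls_per_second
        self.min_interval = 1.0 / calls_per_second
        self.last_call = 0.0
        self._lock = threading.Lock()

    def wait(self) -> None:
        # Sleep just long enough to keep min_interval between calls.
        with self._lock:
            elapsed = time.time() - self.last_call
            if elapsed < self.min_interval:
                time.sleep(self.min_interval - elapsed)
            self.last_call = time.time()

    @classmethod
    def get_global_limiter(cls) -> "RateLimiter":
        # `cls` is the lookup key, so RateLimiter and SlowRateLimiter
        # resolve to different instances with different default rates.
        with cls._global_lock:
            if cls not in cls._global_limiters:
                cls._global_limiters[cls] = cls()
            return cls._global_limiters[cls]


class SlowRateLimiter(RateLimiter):
    """Stricter tier for search/named/random/collection (2 req/s)."""

    def __init__(self, calls_per_second: float = 2.0) -> None:
        super().__init__(calls_per_second)


fast = RateLimiter.get_global_limiter()
slow = SlowRateLimiter.get_global_limiter()
assert fast is not slow  # independent singletons per class
```

This is why endpoint classes only need to declare `_rate_limiter_class = SlowRateLimiter`: the handler's `_fetch_raw()` calls `self._rate_limiter_class.get_global_limiter()` and the registry does the rest.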
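A second pattern worth calling out is the `HTTPError` handling added to `scrython/base.py`: `urllib` raises before the body is read, but the `HTTPError` object is itself file-like, so the Scryfall JSON error payload can still be recovered from it. The sketch below exercises that pattern against a hand-built `HTTPError`, the same way `tests/conftest.py` fakes one (the URL and error payload here are made-up examples):

```python
import io
import json
import urllib.error
from http.client import HTTPMessage

# Build a fake 404 the way the patched test fixtures do.
headers = HTTPMessage()
headers["Content-Type"] = "application/json; charset=utf-8"
body = json.dumps(
    {"object": "error", "status": 404, "code": "not_found", "details": "No card found."}
).encode("utf-8")
exc = urllib.error.HTTPError(
    url="https://api.scryfall.com/cards/named?fuzzy=xyzzy",
    code=404,
    msg="Not Found",
    hdrs=headers,
    fp=io.BytesIO(body),
)

# The pattern from _fetch_raw: read the JSON body off the exception itself,
# honoring the declared charset (falling back to utf-8).
charset = exc.headers.get_param("charset")
if not isinstance(charset, str):
    charset = "utf-8"
error_data = json.loads(exc.read().decode(charset))

assert error_data["object"] == "error"  # becomes a ScryfallError in the patch
```

In the patch, `error_data` with `object == "error"` is re-raised as a structured `ScryfallError`, which is what lets the tests narrow `contextlib.suppress(Exception)` down to `contextlib.suppress(ScryfallError)`.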