diff --git a/README.md b/README.md
index 4319a1f..bf48efa 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ Miners receive **limited observations** to prevent overfitting:
 - **Sybil-proof**: Copies tie under Pareto dominance, no benefit from multiple identities
 - **Copy-proof**: Must improve on the leader to earn, not just match them
 - **Specialization-proof**: Must dominate on ALL environments, not just one
-- **Deployment verification**: Spot-checks verify Basilica deployments match HuggingFace uploads
+- **Deployment verification**: Basilica metadata API verifies deployments use publicly pullable images
 
 ### Scoring: ε-Pareto Dominance
@@ -151,22 +151,21 @@ cd my-policy
 # 3. Test locally
 uvicorn server:app --port 8001
 
-# 4. Upload to HuggingFace
-huggingface-cli upload your-username/kinitro-policy .
+# 4. Build and push Docker image
+docker build -t your-username/kinitro-policy:v1 .
+docker push your-username/kinitro-policy:v1
 
 # 5. Deploy to Basilica
 export BASILICA_API_TOKEN="your-api-token"
 uv run kinitro miner push \
-  --repo your-username/kinitro-policy \
-  --revision YOUR_HF_COMMIT_SHA \
+  --image your-username/kinitro-policy:v1 \
+  --name my-policy \
   --gpu-count 1 --min-vram 16
 
 # 6. Register on chain
 uv run kinitro miner commit \
-  --repo your-username/kinitro-policy \
-  --revision YOUR_HF_COMMIT_SHA \
-  --endpoint YOUR_BASILICA_URL \
+  --deployment-id YOUR_BASILICA_DEPLOYMENT_ID \
   --netuid YOUR_NETUID \
   --network finney
 ```
diff --git a/docs/e2e-testing.md b/docs/e2e-testing.md
index 25ca63f..7ac113c 100644
--- a/docs/e2e-testing.md
+++ b/docs/e2e-testing.md
@@ -67,9 +67,7 @@ curl -X POST http://localhost:8001/reset \
 # 4. For local testing only - commit to local chain
 uv run kinitro miner commit \
-  --repo test-user/test-policy \
-  --revision $(git rev-parse HEAD) \
-  --endpoint http://localhost:8001 \
+  --deployment-id my-local-deploy \
   --netuid 2 \
   --network local \
   --wallet-name test-wallet \
@@ -91,10 +89,12 @@ Deploy a miner to Basilica for realistic E2E testing:
 # 1. Initialize policy template
 uv run kinitro miner init ./test-policy
 
-# 2. Deploy to Basilica (uploads to HuggingFace, deploys, commits on-chain)
+# 2. Build, push, and deploy to Basilica
+docker build -t /test-policy:v1 ./test-policy
+docker push /test-policy:v1
+
 uv run kinitro miner deploy \
-  --repo /test-policy \
-  --path ./test-policy \
+  --image /test-policy:v1 \
   --network $NETWORK \
   --netuid $NETUID \
   --wallet-name alice \
diff --git a/docs/miner-guide.md b/docs/miner-guide.md
index 5bf2c95..0363f68 100644
--- a/docs/miner-guide.md
+++ b/docs/miner-guide.md
@@ -7,16 +7,17 @@ This guide explains how to participate as a miner in Kinitro. As a miner, you'll
 The evaluation flow:
 
 1. You train a robotics policy (locally or on your own infrastructure)
-2. You upload your model weights to HuggingFace
-3. You deploy your policy server to [Basilica](https://basilica.ai) (required for mainnet)
-4. You commit your endpoint info on-chain so validators can find you
-5. Validators periodically evaluate your policy across multiple environments
-6. You earn rewards based on how well your policy generalizes
+2. You build a Docker image containing your policy server
+3. You push your image to a container registry (e.g., Docker Hub)
+4. You deploy your image to [Basilica](https://basilica.ai) (required for mainnet)
+5. You commit your deployment ID on-chain so validators can find you
+6. Validators periodically evaluate your policy across multiple environments
+7. You earn rewards based on how well your policy generalizes
 
 ## Requirements
 
 - **Training**: GPU compute for training your policy
-- **HuggingFace Account**: For storing your model weights
+- **Container Registry**: For storing your Docker image (e.g., Docker Hub)
 - **Basilica Account**: For deploying your policy server (required for mainnet)
 - **Bittensor Wallet**: For registering as a miner and committing your endpoint
@@ -34,13 +35,15 @@ cd my-policy
 # 3. Test locally
 uvicorn server:app --port 8001
 
-# 4. One-command deployment (upload + deploy + commit)
-export HF_TOKEN="your-huggingface-token"
+# 4. Build and push Docker image
+docker build -t your-username/kinitro-policy:v1 .
+docker push your-username/kinitro-policy:v1
+
+# 5. Deploy to Basilica and commit on-chain
 export BASILICA_API_TOKEN="your-basilica-api-token"
 uv run kinitro miner deploy \
-  --repo your-username/kinitro-policy \
-  --path ./my-policy \
+  --image your-username/kinitro-policy:v1 \
   --netuid YOUR_NETUID \
   --network finney
 ```
@@ -48,17 +51,14 @@ uv run kinitro miner deploy \
 Or do each step separately:
 
 ```bash
-# Upload to HuggingFace
-huggingface-cli upload your-username/kinitro-policy .
-
 # Deploy to Basilica
-uv run kinitro miner push --repo your-username/kinitro-policy --revision YOUR_HF_SHA
+uv run kinitro miner push \
+  --image your-username/kinitro-policy:v1 \
+  --name my-policy
 
 # Commit on-chain
 uv run kinitro miner commit \
-  --repo your-username/kinitro-policy \
-  --revision YOUR_HF_COMMIT_SHA \
-  --endpoint YOUR_BASILICA_URL \
+  --deployment-id YOUR_BASILICA_DEPLOYMENT_ID \
   --netuid YOUR_NETUID
 ```
@@ -76,8 +76,7 @@ This creates:
 - `server.py` - FastAPI server with `/reset` and `/act` endpoints (for local testing and Basilica deployment)
 - `policy.py` - Policy implementation template (edit this!)
-- `basilica_deploy.py` - Basilica deployment script
-- `Dockerfile` - For containerizing your policy (optional, for self-hosted)
+- `Dockerfile` - For containerizing your policy
 - `requirements.txt` - Python dependencies
 
 ## Step 2: Understand the Observation Space
@@ -207,25 +206,19 @@ curl -X POST http://localhost:8001/act \
 The `server.py` file provides the endpoints (`/health`, `/reset`, `/act`) that validators will call. This lets you test your policy logic locally before deploying to Basilica.
 
-## Step 5: Upload to HuggingFace
+## Step 5: Build and Push Docker Image
 
-Before deploying, upload your model weights to HuggingFace:
+Build your policy into a Docker image and push to a container registry:
 
 ```bash
-# Install huggingface-cli if needed
-pip install huggingface_hub
-
-# Login to HuggingFace
-huggingface-cli login
+# Build the Docker image
+docker build -t your-username/kinitro-policy:v1 .
 
-# Create a new model repository
-huggingface-cli repo create your-username/kinitro-policy --type model
+# Test locally with Docker
+docker run -p 8001:8000 your-username/kinitro-policy:v1
 
-# Upload your model files
-huggingface-cli upload your-username/kinitro-policy ./my-policy
-
-# Note the commit SHA for the on-chain commitment
-git ls-remote https://huggingface.co/your-username/kinitro-policy HEAD
+# Push to Docker Hub (or any public registry)
+docker push your-username/kinitro-policy:v1
 ```
 
 ## Step 6: Deploy to Basilica (Required for Mainnet)
@@ -247,26 +240,24 @@ export BASILICA_API_TOKEN="your-api-token"
 # Deploy to Basilica
 uv run kinitro miner push \
-  --repo your-username/kinitro-policy \
-  --revision YOUR_HUGGINGFACE_COMMIT_SHA \
+  --image your-username/kinitro-policy:v1 \
+  --name my-policy \
   --gpu-count 1 \
   --min-vram 16
 ```
 
-This command:
-
-1. Downloads your policy from HuggingFace
-2. Builds a container image with your policy
-3. Deploys to Basilica
-4. Returns the endpoint URL for on-chain commitment
+This command deploys your pre-built Docker image to Basilica and returns the deployment ID for on-chain commitment.
 
 ### Verify Deployment
 
-After deployment, note your **endpoint URL**. You can verify your deployment:
+After deployment, note your **deployment ID**. You can verify your deployment:
 
 ```bash
 # Test the endpoint (replace with your URL)
 curl https://YOUR-DEPLOYMENT-ID.deployments.basilica.ai/health
+
+# Verify metadata
+uv run kinitro miner verify --deployment-id YOUR-DEPLOYMENT-ID
 ```
 
 ### GPU vs CPU Deployments
@@ -275,8 +266,8 @@ For testing, you can deploy without GPU:
 
 ```bash
 uv run kinitro miner push \
-  --repo your-username/kinitro-policy \
-  --revision YOUR_HF_SHA \
+  --image your-username/kinitro-policy:v1 \
+  --name my-policy \
   --gpu-count 0  # CPU-only for testing
 ```
@@ -284,30 +275,27 @@ For production with GPU:
 
 ```bash
 uv run kinitro miner push \
-  --repo your-username/kinitro-policy \
-  --revision YOUR_HF_SHA \
+  --image your-username/kinitro-policy:v1 \
+  --name my-policy \
   --gpu-count 1 \
   --min-vram 16
 ```
 
-### Deployment Verification (Spot-Checks)
+### Deployment Verification
 
-> **Important**: Your Basilica deployment may be spot-checked to verify it matches your HuggingFace upload.
+> **Important**: Your Basilica deployment is verified via the Basilica metadata API.
 
-The evaluation system performs random verification checks to ensure miners are running the same code they uploaded to HuggingFace. During verification:
+The evaluation system checks deployments to ensure they are running and using a publicly pullable Docker image. During verification:
 
-1. Your policy is downloaded from HuggingFace
-2. Test observations are generated with deterministic seeds
-3. Local inference is compared against your Basilica endpoint
-4. If outputs don't match, verification fails
+1. Deployment state is checked (must be "Running")
+2. Docker image is verified to be publicly pullable
+3. Public metadata enrollment is checked
 
 **To pass verification:**
 
-- Your Basilica deployment must serve the exact same model as your HuggingFace upload
-- If your policy uses randomness, support the optional `seed` parameter in your `/act` endpoint (the template already handles this)
-- Don't modify your deployment code after uploading to HuggingFace
-
-**Size limits:** HuggingFace repositories larger than 5GB will be rejected. This limit applies to both uploads and verification downloads.
+- Your Docker image must be publicly pullable from a container registry
+- Your Basilica deployment must be running and healthy
+- Enroll for public metadata (the CLI does this automatically)
 
 ## Local Testing (Development Only)
@@ -326,9 +314,7 @@ You can then test with a local validator backend by committing your local endpoi
 ```bash
 # For LOCAL TESTING ONLY - not valid for mainnet
 uv run kinitro miner commit \
-  --repo your-username/kinitro-policy \
-  --revision $(git rev-parse HEAD) \
-  --endpoint http://localhost:8001 \
+  --deployment-id my-local-deploy \
   --netuid 2 \
   --network local \
   --wallet-name test-wallet \
@@ -339,21 +325,15 @@ uv run kinitro miner commit \
 
 ## Step 7: Commit On-Chain
 
-Register your policy endpoint on-chain so validators can find and evaluate you.
+Register your deployment on-chain so validators can find and evaluate you.
 
-The commitment includes three pieces of information:
-
-- **model**: Your HuggingFace repository (e.g., `your-username/kinitro-policy`)
-- **revision**: The HuggingFace commit SHA of your model
-- **endpoint**: Your Basilica deployment URL
+The commitment stores your **Basilica deployment ID** on-chain.
 
 ### Basic Commitment (Endpoint Visible On-Chain)
 
 ```bash
 uv run kinitro miner commit \
-  --repo your-username/kinitro-policy \
-  --revision YOUR_HUGGINGFACE_COMMIT_SHA \
-  --deployment-id YOUR_BASILICA_DEPLOYMENT_UUID \
+  --deployment-id YOUR_BASILICA_DEPLOYMENT_ID \
   --netuid YOUR_SUBNET_ID \
   --network finney \
   --wallet-name your-wallet \
@@ -366,9 +346,7 @@ To protect your Basilica endpoint from public disclosure, use encrypted commitme
 ```bash
 uv run kinitro miner commit \
-  --repo your-username/kinitro-policy \
-  --revision YOUR_HUGGINGFACE_COMMIT_SHA \
-  --deployment-id YOUR_BASILICA_DEPLOYMENT_UUID \
+  --deployment-id YOUR_BASILICA_DEPLOYMENT_ID \
   --netuid YOUR_SUBNET_ID \
   --network finney \
   --wallet-name your-wallet \
@@ -399,24 +377,17 @@ uv run kinitro miner commit \
 
 ### Commitment Format
 
-The commitment is stored on-chain as compact JSON to fit within chain limits:
+The commitment is stored on-chain in a compact format:
 
 **Plain commitment:**
 
-```json
-{"m":"your-username/kinitro-policy","r":"abc123def456...","d":"deployment-uuid"}
+```
+deployment-uuid
 ```
 
 **Encrypted commitment:**
 
-```json
-{"m":"your-username/kinitro-policy","r":"abc123def456...","e":""}
 ```
-
-Where:
-
-- `m` = HuggingFace model repository
-- `r` = HuggingFace revision (commit SHA)
-- `d` = Basilica deployment ID (UUID) - for plain commitments
-- `e` = Encrypted deployment ID (base85 blob) - for encrypted commitments
+```
+e:
+```
 
 ### Verify Your Commitment
@@ -432,9 +403,9 @@ uv run kinitro miner show-commitment \
 
 When you update your model:
 
-1. Upload the new weights to HuggingFace
-2. Deploy the updated model to Basilica
-3. Commit the new revision and endpoint on-chain
+1. Build and push a new Docker image
+2. Deploy the updated image to Basilica
+3. Commit the new deployment ID on-chain
 
 Validators will automatically pick up your new endpoint at the next evaluation cycle.
@@ -546,8 +517,7 @@ Key implications:
 1. Check your on-chain commitment: `uv run kinitro miner show-commitment --netuid ... --wallet-name ...`
 2. Verify your Basilica deployment is running - check the Basilica dashboard
 3. Verify your endpoint is accessible: `curl YOUR_BASILICA_ENDPOINT/health`
-4. Ensure the revision in your commitment matches the deployed model
-5. Check validator logs for errors
+4. Check validator logs for errors
 
 ### Basilica deployment issues
@@ -570,10 +540,9 @@ Key implications:
 ### Commitment not recognized
 
-- Ensure you're using JSON format (not legacy colon-separated)
-- Verify the HuggingFace repo exists and is accessible
-- Check that the revision SHA matches your HuggingFace commit
-- Commitment must be under ~128 bytes (uses compact JSON with short keys)
+- Ensure your deployment ID is correct
+- Verify the Basilica deployment is running
+- Commitment must be under ~128 bytes
 
 ### Testing Endpoints
diff --git a/kinitro/api/routes/tasks.py b/kinitro/api/routes/tasks.py
index ddfb6a8..4772235 100644
--- a/kinitro/api/routes/tasks.py
+++ b/kinitro/api/routes/tasks.py
@@ -53,8 +53,6 @@ async def fetch_tasks(
             miner_uid=MinerUID(t.miner_uid),
             miner_hotkey=Hotkey(t.miner_hotkey),
             miner_endpoint=t.miner_endpoint,
-            miner_repo=t.miner_repo,
-            miner_revision=t.miner_revision,
             env_id=EnvironmentId(t.env_id),
             seed=Seed(t.seed),
             status=TaskStatus(t.status),
diff --git a/kinitro/backend/models.py b/kinitro/backend/models.py
index d54a4d8..3a2278b 100644
--- a/kinitro/backend/models.py
+++ b/kinitro/backend/models.py
@@ -158,8 +158,6 @@ class TaskPoolORM(Base):
     miner_uid: Mapped[int] = mapped_column(Integer, nullable=False)
     miner_hotkey: Mapped[str] = mapped_column(String(64), nullable=False)
     miner_endpoint: Mapped[str] = mapped_column(Text, nullable=False)
-    miner_repo: Mapped[str | None] = mapped_column(String(256), nullable=True)
-    miner_revision: Mapped[str | None] = mapped_column(String(64), nullable=True)
     env_id: Mapped[str] = mapped_column(String(64), nullable=False)
     seed: Mapped[int] = mapped_column(Integer, nullable=False)
     status: Mapped[str] = mapped_column(
@@ -302,8 +300,6 @@ class Task(BaseModel):
     miner_uid: MinerUID
     miner_hotkey: Hotkey
     miner_endpoint: str
-    miner_repo: str | None = None  # HuggingFace repo for verification
-    miner_revision: str | None = None  # HuggingFace revision for verification
     env_id: EnvironmentId
     seed: Seed  # Deterministic seed for reproducibility
     status: TaskStatus
@@ -339,14 +335,6 @@ class TaskResult(BaseModel):
     total_reward: float = Field(default=0.0)
     timesteps: int = Field(default=0)
     error: str | None = Field(default=None)
-    verification_passed: bool | None = Field(
-        default=None,
-        description="Whether miner passed model verification (None if not checked)",
-    )
-    verification_score: float | None = Field(
-        default=None,
-        description="Match score between deployed and HuggingFace model (0.0 to 1.0)",
-    )
 
 
 class TaskSubmitRequest(BaseModel):
diff --git a/kinitro/backend/storage.py b/kinitro/backend/storage.py
index 4f44d03..4512a17 100644
--- a/kinitro/backend/storage.py
+++ b/kinitro/backend/storage.py
@@ -361,8 +361,6 @@ async def create_tasks_bulk(
                 "miner_uid": task_data["miner_uid"],
                 "miner_hotkey": task_data["miner_hotkey"],
                 "miner_endpoint": task_data["miner_endpoint"],
-                "miner_repo": task_data.get("miner_repo"),
-                "miner_revision": task_data.get("miner_revision"),
                 "env_id": task_data["env_id"],
                 "seed": task_data["seed"],
                 "status": TaskStatus.PENDING.value,
diff --git a/kinitro/chain/commitments.py b/kinitro/chain/commitments.py
index ecc7100..1a9bff5 100644
--- a/kinitro/chain/commitments.py
+++ b/kinitro/chain/commitments.py
@@ -55,10 +55,7 @@ class MinerCommitment:
 
     uid: MinerUID
     hotkey: Hotkey
-    huggingface_repo: str
-    revision_sha: str
     deployment_id: str  # Basilica deployment ID (UUID, not full URL) - decrypted if encrypted
-    docker_image: str
     committed_block: BlockNumber
     encrypted_deployment: str | None = field(default=None)  # Base85 encrypted blob (if encrypted)
@@ -83,13 +80,8 @@ def is_encrypted(self) -> bool:
 
     @property
     def is_valid(self) -> bool:
-        """Check if commitment has all required fields.
-
-        For encrypted commitments, deployment_id may be empty until decrypted.
-        """
-        has_basic_fields = bool(self.huggingface_repo and self.revision_sha)
-        has_endpoint = bool(self.deployment_id) or bool(self.encrypted_deployment)
-        return has_basic_fields and has_endpoint
+        """Check if commitment has all required fields."""
+        return bool(self.deployment_id) or bool(self.encrypted_deployment)
 
     @property
     def needs_decryption(self) -> bool:
@@ -101,58 +93,62 @@ def parse_commitment(raw: str) -> ParsedCommitment:
     """
     Parse raw commitment string from chain.
 
-    Format: "user/repo:rev8char:deployment_id" (plain)
-            "user/repo:rev8char:e:" (encrypted)
+    New format:
+        "deployment_id" (plain)
+        "e:" (encrypted)
 
-    Note: revision is truncated to 8 characters (short SHA).
+    Legacy format (backward compat):
+        "user/repo:rev8char:deployment_id" (plain)
+        "user/repo:rev8char:e:" (encrypted)
 
     Args:
         raw: Raw commitment string
 
     Returns:
         Dict with parsed fields:
-        - huggingface_repo, revision_sha, docker_image (always)
         - deployment_id (for plain commitments)
         - encrypted_deployment (for encrypted commitments)
     """
     parts = raw.split(":", 3)
 
+    # New encrypted format: "e:"
+    if len(parts) >= 2 and parts[0] == "e":
+        encrypted_blob = raw[2:]  # Everything after "e:"
+        return {
+            "deployment_id": "",
+            "encrypted_deployment": encrypted_blob,
+        }
+
+    # Legacy format: "repo:rev:deployment_id" or "repo:rev:e:"
     if len(parts) >= 3:
-        hf_repo = parts[0]
-        revision = parts[1]
         third_part = parts[2]
-        docker_image = f"{hf_repo}:{revision}"
 
-        # Check if encrypted (third part is "e" followed by blob in fourth part)
+        # Legacy encrypted: repo:rev:e:
         if third_part == "e" and len(parts) >= 4:
-            # Encrypted format: repo:rev:e:
             encrypted_blob = parts[3]
             return {
-                "huggingface_repo": hf_repo,
-                "revision_sha": revision,
-                "deployment_id": "",  # Will be decrypted later
+                "deployment_id": "",
                 "encrypted_deployment": encrypted_blob,
-                "docker_image": docker_image,
             }
 
-        # Plain format: repo:rev:uuid
-        deployment_id = third_part
+        # Legacy plain: repo:rev:deployment_id
+        return {
+            "deployment_id": third_part,
+            "encrypted_deployment": None,
+        }
+
+    # New plain format: just the deployment_id (no colons)
+    if len(parts) == 1 and raw:
         return {
-            "huggingface_repo": hf_repo,
-            "revision_sha": revision,
-            "deployment_id": deployment_id,
+            "deployment_id": raw,
             "encrypted_deployment": None,
-            "docker_image": docker_image,
         }
 
     # Invalid format
     logger.warning("invalid_commitment_format", raw=raw)
     return {
-        "huggingface_repo": "",
-        "revision_sha": "",
         "deployment_id": "",
         "encrypted_deployment": None,
-        "docker_image": "",
     }
@@ -337,10 +333,7 @@
             commitment = MinerCommitment(
                 uid=MinerUID(uid),
                 hotkey=Hotkey(hotkey),
-                huggingface_repo=parsed["huggingface_repo"],
-                revision_sha=parsed["revision_sha"],
                 deployment_id=deployment_id,
-                docker_image=parsed["docker_image"],
                 committed_block=BlockNumber(committed_block),
                 encrypted_deployment=encrypted_deployment,
             )
@@ -349,7 +342,6 @@
             logger.debug(
                 "found_commitment",
                 uid=uid,
-                repo=commitment.huggingface_repo,
                 block=committed_block,
                 encrypted=commitment.is_encrypted,
             )
@@ -414,10 +406,7 @@
             commitment = MinerCommitment(
                 uid=MinerUID(uid),
                 hotkey=Hotkey(hotkey),
-                huggingface_repo=parsed["huggingface_repo"],
-                revision_sha=parsed["revision_sha"],
                 deployment_id=deployment_id,
-                docker_image=parsed["docker_image"],
                 committed_block=BlockNumber(committed_block),
                 encrypted_deployment=encrypted_deployment,
             )
@@ -426,7 +415,6 @@
             logger.debug(
                 "found_commitment",
                 uid=uid,
-                repo=commitment.huggingface_repo,
                 block=committed_block,
                 encrypted=commitment.is_encrypted,
             )
@@ -473,22 +461,19 @@ def decrypt_commitments(
 
 
 def _build_commitment_data(
-    repo: str,
-    revision: str,
     deployment_id: str,
     backend_public_key: str | None = None,
 ) -> str | None:
-    """Build the colon-separated commitment string.
+    """Build the commitment string.
+
+    Plain format: ``deployment_id``
+    Encrypted format: ``e:``
 
     Returns the commitment data string, or None if validation/encryption fails.
     """
-    revision_short = revision[:8]
-
-    if ":" in repo or ":" in revision_short or ":" in deployment_id:
+    if ":" in deployment_id:
         logger.error(
             "commitment_field_contains_colon",
-            repo=repo,
-            revision=revision_short,
             deployment_id=deployment_id,
         )
         return None
@@ -496,7 +481,7 @@
     if backend_public_key:
         try:
             encrypted_blob = encrypt_deployment_id(deployment_id, backend_public_key)
-            commitment_data = f"{repo}:{revision_short}:e:{encrypted_blob}"
+            commitment_data = f"e:{encrypted_blob}"
             logger.info(
                 "commitment_encrypted",
                 data_length=len(commitment_data),
@@ -506,7 +491,7 @@
             logger.exception("encryption_failed", error=str(e))
             return None
     else:
-        commitment_data = f"{repo}:{revision_short}:{deployment_id}"
+        commitment_data = deployment_id
 
     logger.info("commitment_data", data=commitment_data, length=len(commitment_data))
 
     if len(commitment_data) > MAX_COMMITMENT_SIZE:
@@ -514,7 +499,6 @@
         logger.error(
             "commitment_too_large",
             size=len(commitment_data),
             max_size=MAX_COMMITMENT_SIZE,
-            repo_length=len(repo),
         )
         return None
@@ -525,29 +509,22 @@ def commit_model(
     subtensor: Subtensor,
     wallet: Wallet,
     netuid: int,
-    repo: str,
-    revision: str,
     deployment_id: str,
     backend_public_key: str | None = None,
 ) -> bool:
     """
-    Commit model info to chain using compact colon-separated format.
+    Commit deployment info to chain.
 
-    This is called by miners to register their model.
+    This is called by miners to register their deployment.
 
     Format:
-    - Plain: "user/repo:rev8char:uuid" (~67 bytes for 30-char repo)
-    - Encrypted: "user/repo:rev8char:e:" (~121 bytes for 30-char repo)
-
-    The 128-byte chain limit allows repo names up to ~37 chars for encrypted
-    commitments or ~97 chars for plain commitments.
+    - Plain: "deployment_id"
+    - Encrypted: "e:"
 
     Args:
         subtensor: Bittensor subtensor connection
         wallet: Miner's wallet
         netuid: Subnet UID
-        repo: HuggingFace repository (user/model), max ~37 chars for encrypted mode
-        revision: Commit SHA (will be truncated to 8 chars)
         deployment_id: Basilica deployment ID (UUID only, not full URL)
         backend_public_key: Optional hex-encoded X25519 public key for encrypting
             endpoint. If provided, the deployment_id will be encrypted so only
@@ -556,7 +533,7 @@
     Returns:
         True if commitment succeeded
     """
-    commitment_data = _build_commitment_data(repo, revision, deployment_id, backend_public_key)
+    commitment_data = _build_commitment_data(deployment_id, backend_public_key)
     if commitment_data is None:
         return False
@@ -573,8 +550,6 @@
     if success:
         logger.info(
             "commitment_submitted",
-            repo=repo,
-            revision=revision[:8],
            deployment_id=deployment_id[:8] + "..." if deployment_id else None,
             encrypted=bool(backend_public_key),
         )
@@ -588,8 +563,6 @@ async def commit_model_async(
     subtensor: AsyncSubtensor,
     wallet: Wallet,
     netuid: int,
-    repo: str,
-    revision: str,
     deployment_id: str,
     backend_public_key: str | None = None,
 ) -> bool:
@@ -597,7 +570,7 @@
 
     Uses :class:`AsyncSubtensor` for non-blocking chain I/O.
     """
-    commitment_data = _build_commitment_data(repo, revision, deployment_id, backend_public_key)
+    commitment_data = _build_commitment_data(deployment_id, backend_public_key)
     if commitment_data is None:
         return False
@@ -614,8 +587,6 @@
     if success:
         logger.info(
             "commitment_submitted",
-            repo=repo,
-            revision=revision[:8],
             deployment_id=deployment_id[:8] + "..." if deployment_id else None,
             encrypted=bool(backend_public_key),
         )
diff --git a/kinitro/cli/miner/__init__.py b/kinitro/cli/miner/__init__.py
index ed09daf..b6ce80c 100644
--- a/kinitro/cli/miner/__init__.py
+++ b/kinitro/cli/miner/__init__.py
@@ -8,6 +8,7 @@
 from .deploy import basilica_push, miner_deploy
 from .mock import mock
 from .template import init_miner
+from .verify import verify
 
 
 def build(
@@ -71,5 +72,6 @@ def build(
 miner_app.command(name="push")(basilica_push)
 miner_app.command(name="deploy")(miner_deploy)
 miner_app.command()(mock)
+miner_app.command()(verify)
 
 __all__ = ["miner_app"]
diff --git a/kinitro/cli/miner/commitment.py b/kinitro/cli/miner/commitment.py
index b6d3109..defd5dc 100644
--- a/kinitro/cli/miner/commitment.py
+++ b/kinitro/cli/miner/commitment.py
@@ -19,8 +19,6 @@ async def _commit_async(
     wallet_name: str,
     hotkey_name: str,
     netuid: int,
-    repo: str,
-    revision: str,
     deployment_id: str,
     encrypt: bool,
     backend_public_key: str | None,
@@ -32,8 +30,6 @@ async def _commit_async(
         subtensor=subtensor,
         wallet=wallet,
         netuid=netuid,
-        repo=repo,
-        revision=revision,
         deployment_id=deployment_id,
         backend_public_key=backend_public_key if encrypt else None,
     )
@@ -63,8 +59,6 @@ async def _get_neurons_hotkey_async(
 
 
 def commit(
-    repo: str = typer.Option(..., help="HuggingFace repo (user/model)"),
-    revision: str = typer.Option(..., help="Commit SHA"),
     deployment_id: str = typer.Option(
         ..., "--deployment-id", "-d", help="Basilica deployment ID (UUID only)"
     ),
@@ -93,7 +87,7 @@ def commit(
     ),
 ):
     """
-    Commit model to chain.
+    Commit deployment to chain.
 
     Registers your policy so validators can evaluate it.
@@ -107,14 +101,14 @@ def commit(
 
     Example:
         # Plain commitment (endpoint visible on-chain)
-        kinitro miner commit --repo user/policy --revision abc123 --deployment-id UUID --netuid 1
+        kinitro miner commit --deployment-id UUID --netuid 1
 
         # Encrypted commitment using backend hotkey (recommended)
-        kinitro miner commit --repo user/policy --revision abc123 --deployment-id UUID \\
+        kinitro miner commit --deployment-id UUID \\
             --netuid 1 --encrypt --backend-hotkey 5Dxxx...
 
         # Encrypted commitment using explicit public key
-        kinitro miner commit --repo user/policy --revision abc123 --deployment-id UUID \\
+        kinitro miner commit --deployment-id UUID \\
             --netuid 1 --encrypt --backend-public-key 
     """
     # Validate encryption options
@@ -153,9 +147,7 @@ def commit(
         )
         raise typer.Exit(1)
 
-    typer.echo(f"Committing model to {network} (netuid={netuid})")
-    typer.echo(f"  Repo: {repo}")
-    typer.echo(f"  Revision: {revision}")
+    typer.echo(f"Committing deployment to {network} (netuid={netuid})")
     typer.echo(f"  Deployment ID: {deployment_id}")
     if encrypt:
         typer.echo("  Encryption: ENABLED")
@@ -169,8 +161,6 @@ def commit(
         wallet_name=wallet_name,
         hotkey_name=hotkey_name,
         netuid=netuid,
-        repo=repo,
-        revision=revision,
         deployment_id=deployment_id,
         encrypt=encrypt,
         backend_public_key=backend_public_key,
@@ -230,19 +220,15 @@ def show_commitment(
     if block is not None:
         typer.echo(f"Committed at block: {block}")
 
-    # Parse the commitment (supports both JSON and legacy formats)
+    # Parse the commitment
     parsed = parse_commitment(raw)
-    if parsed["huggingface_repo"]:
+    if parsed["deployment_id"] or parsed.get("encrypted_deployment"):
         typer.echo("\nParsed commitment:")
-        typer.echo(f"  Repo: {parsed['huggingface_repo']}")
-        typer.echo(f"  Revision: {parsed['revision_sha']}")
         encrypted_blob = parsed.get("encrypted_deployment")
         if encrypted_blob:
             typer.echo("  Encrypted: YES")
             typer.echo(f"  Encrypted Blob: {encrypted_blob[:40]}...")
         else:
             typer.echo(f"  Deployment ID: {parsed['deployment_id']}")
-        if parsed["docker_image"]:
-            typer.echo(f"  Docker Image: {parsed['docker_image']}")
     else:
         typer.echo("\nCould not parse commitment format.")
diff --git a/kinitro/cli/miner/deploy.py b/kinitro/cli/miner/deploy.py
index ea8a202..1591b2d 100644
--- a/kinitro/cli/miner/deploy.py
+++ b/kinitro/cli/miner/deploy.py
@@ -7,70 +7,9 @@
 from basilica import BasilicaClient
 from bittensor import AsyncSubtensor
 from bittensor_wallet import Wallet
-from huggingface_hub import HfApi
 
 from kinitro.chain.commitments import commit_model_async
 
-# Shared deployment configuration
-PIP_PACKAGES = [
-    "fastapi",
-    "uvicorn",
-    "numpy",
-    "huggingface-hub",
-    "pydantic",
-    "pillow",
-]
-
-
-def _get_deployment_source(repo: str, revision: str) -> str:
-    """Generate the deployment source code template."""
-    return f"""
-import os
-import sys
-import subprocess
-
-print("Starting Kinitro Policy Server...")
-print(f"HF_REPO: {{os.environ.get('HF_REPO', 'not set')}}")
-print(f"HF_REVISION: {{os.environ.get('HF_REVISION', 'not set')}}")
-
-# Download model from HuggingFace
-from huggingface_hub import snapshot_download
-
-hf_token = os.environ.get("HF_TOKEN") or None
-print("Downloading model from HuggingFace...")
-snapshot_download(
-    "{repo}",
-    revision="{revision}",
-    local_dir="/app",
-    token=hf_token,
-)
-print("Model downloaded successfully!")
-
-# Change to /app directory and add to Python path
-os.chdir("/app")
-sys.path.insert(0, "/app")
-
-# Start the FastAPI server from /app directory
-print("Starting uvicorn server on port 8000...")
-subprocess.run(
-    [
-        sys.executable, "-m", "uvicorn",
-        "server:app",
-        "--host", "0.0.0.0",
-        "--port", "8000",
-    ],
-    cwd="/app",
-    check=True,
-)
-"""
-
-
-def _get_docker_image(gpu_count: int) -> str:
-    """Choose Docker image based on GPU requirement."""
-    if gpu_count > 0:
-        return "pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime"
-    return "python:3.11-slim"
-
 
 def _extract_deployment_id(deployment) -> str:
     """Extract deployment ID from deployment URL or fall back to name."""
@@ -87,11 +26,13 @@
 
 
 def basilica_push(
-    repo: str = typer.Option(..., "--repo", "-r", help="HuggingFace repository ID"),
-    revision: str = typer.Option(..., "--revision", help="HuggingFace commit SHA"),
-    deployment_name: str | None = typer.Option(
-        None, "--name", "-n", help="Deployment name (default: derived from repo)"
+    image: str = typer.Option(
+        ...,
+        "--image",
+        "-i",
+        help="Docker image to deploy (e.g., user/policy:v1)",
     ),
+    deployment_name: str = typer.Option(..., "--name", "-n", help="Deployment name"),
     gpu_count: int = typer.Option(0, "--gpu-count", help="Number of GPUs (0 for CPU-only)"),
     min_gpu_memory_gb: int | None = typer.Option(None, "--min-vram", help="Minimum GPU VRAM in GB"),
     cpu: str = typer.Option("1", "--cpu", help="CPU allocation (e.g., '1', '2', '500m', '4000m')"),
@@ -101,27 +42,19 @@
     basilica_api_token: str | None = typer.Option(
         None, "--api-token", envvar="BASILICA_API_TOKEN", help="Basilica API token"
     ),
-    hf_token: str | None = typer.Option(
-        None, "--hf-token", envvar="HF_TOKEN", help="HuggingFace token for private repos"
-    ),
     timeout: int = typer.Option(600, "--timeout", help="Deployment timeout in seconds"),
 ):
     """
-    Deploy policy to Basilica.
-
-    Deploys your robotics policy server to Basilica's GPU serverless platform.
-    The policy is downloaded from HuggingFace and served via FastAPI.
+    Deploy a Docker image to Basilica.
 
+    Requires --image and --name.
     Requires BASILICA_API_TOKEN environment variable or --api-token.
 
     Example:
-        kinitro miner push --repo user/policy --revision abc123
+        kinitro miner push --image user/policy:v1 --name my-policy
 
-        # With custom name
-        kinitro miner push --repo user/policy --revision abc123 --name my-policy
-
-        # With more GPU memory
-        kinitro miner push --repo user/policy --revision abc123 --min-vram 24
+        # With GPU
+        kinitro miner push --image user/policy:v1 --name my-policy --gpu-count 1 --min-vram 16
     """
     # Validate required credentials
     api_token = basilica_api_token or os.environ.get("BASILICA_API_TOKEN")
@@ -132,55 +65,44 @@
         typer.echo("\nTo get a token, run: basilica tokens create")
         raise typer.Exit(1)
 
-    # Derive deployment name from repo
-    name = deployment_name or repo.replace("/", "-").lower()
-    # Ensure name is DNS-safe (lowercase, alphanumeric and hyphens only)
-    name = "".join(c if c.isalnum() or c == "-" else "-" for c in name).strip("-")[:63]
+    client = BasilicaClient(api_key=api_token)
+
+    name = "".join(c if c.isalnum() or c == "-" else "-" for c in deployment_name).strip("-")[:63]
 
     typer.echo("Deploying to Basilica:")
-    typer.echo(f"  Repo: {repo}")
-    typer.echo(f"  Revision: {revision[:12]}...")
+    typer.echo(f"  Image: {image}")
     typer.echo(f"  Deployment Name: {name}")
     vram_str = f" (min {min_gpu_memory_gb}GB VRAM)" if min_gpu_memory_gb else ""
     typer.echo(f"  GPU: {gpu_count}x{vram_str}")
     typer.echo(f"  Memory: {memory}")
 
-    # Create client and build deployment configuration
-    client = BasilicaClient(api_key=api_token)
-    source_code = _get_deployment_source(repo, revision)
-
-    # Note: kinitro is NOT included in pip_packages because the miner template
-    # is self-contained (includes rl_interface.py locally). This avoids
-    # dependency conflicts and keeps the deployment lightweight.
-    env_vars: dict[str, str] = {"HF_REPO": repo, "HF_REVISION": revision}
-    if hf_token:
-        env_vars["HF_TOKEN"] = hf_token
-
-    deploy_kwargs: dict = {
-        "name": name,
-        "source": source_code,
-        "image": _get_docker_image(gpu_count),
-        "port": 8000,
-        "env": env_vars,
-        "cpu": cpu,
-        "memory": memory,
-        "pip_packages": PIP_PACKAGES,
-        "timeout": timeout,
-    }
-
-    if gpu_count > 0:
-        deploy_kwargs["gpu_count"] = gpu_count
-        if min_gpu_memory_gb is not None:
-            deploy_kwargs["min_gpu_memory_gb"] = min_gpu_memory_gb
-
-    # Deploy
     typer.echo("\nDeploying to Basilica (this may take several minutes)...")
     try:
-        deployment = client.deploy(**deploy_kwargs)
+        gpu_kwargs: dict = {}
+        if gpu_count > 0:
+            gpu_kwargs["gpu_count"] = gpu_count
+            if min_gpu_memory_gb is not None:
+                gpu_kwargs["min_gpu_memory_gb"] = min_gpu_memory_gb
+        deployment = client.deploy(
+            name=name,
+            image=image,
+            port=8000,
+            cpu=cpu,
+            memory=memory,
+            timeout=timeout,
+            **gpu_kwargs,
+        )
     except Exception as e:
         typer.echo(f"\nDeployment failed: {e}", err=True)
         raise typer.Exit(1)
 
+    # Enroll for public metadata so validators can verify the deployment
+    try:
+        client.enroll_metadata(deployment.name, enabled=True)
+        typer.echo("  Public metadata enrolled for validator verification.")
+    except Exception as e:
+        typer.echo(f"  Warning: Could not enroll public metadata: {e}", err=True)
+
     deploy_id = _extract_deployment_id(deployment)
 
     typer.echo("\n" + "=" * 60)
@@ -190,29 +112,19 @@
     typer.echo(f"  URL: {deployment.url}")
     typer.echo(f"  State: {deployment.state}")
     typer.echo("=" * 60)
+
     typer.echo("\nNext step - commit on-chain:")
-    typer.echo(f"  kinitro miner commit --repo {repo} --revision {revision} \\")
-    typer.echo(f"    --deployment-id {deploy_id} --netuid YOUR_NETUID")
+    typer.echo(f"  kinitro miner commit --deployment-id {deploy_id} --netuid YOUR_NETUID")
 
 
 def miner_deploy(
-    repo: str = typer.Option(..., "--repo", "-r", help="HuggingFace repository ID"),
-    policy_path: str | None = typer.Option(
- None, "--path", "-p", help="Path to local policy directory" - ), - revision: str | None = typer.Option( - None, "--revision", help="HuggingFace commit SHA (required if --skip-upload)" - ), + image: str = typer.Option(..., "--image", "-i", help="Docker image to deploy"), deployment_id: str | None = typer.Option( - None, "--deployment-id", "-d", help="Basilica deployment ID (required if --skip-deploy)" + None, "--deployment-id", "-d", help="Basilica deployment ID (skip deploy step)" ), deployment_name: str | None = typer.Option( - None, "--name", "-n", help="Deployment name (default: derived from repo)" + None, "--name", "-n", help="Deployment name (default: derived from image)" ), - message: str = typer.Option( - "Model update", "--message", "-m", help="Commit message for HuggingFace upload" - ), - skip_upload: bool = typer.Option(False, "--skip-upload", help="Skip HuggingFace upload"), skip_deploy: bool = typer.Option(False, "--skip-deploy", help="Skip Basilica deployment"), skip_commit: bool = typer.Option(False, "--skip-commit", help="Skip on-chain commit"), dry_run: bool = typer.Option( @@ -227,7 +139,6 @@ def miner_deploy( basilica_api_token: str | None = typer.Option( None, "--api-token", envvar="BASILICA_API_TOKEN", help="Basilica API token" ), - hf_token: str | None = typer.Option(None, "--hf-token", envvar="HF_TOKEN"), gpu_count: int = typer.Option(0, "--gpu-count", help="Number of GPUs (0 for CPU-only)"), min_gpu_memory_gb: int | None = typer.Option(None, "--min-vram", help="Minimum GPU VRAM in GB"), cpu: str = typer.Option("1", "--cpu", help="CPU allocation (e.g., '1', '2', '500m', '4000m')"), @@ -237,36 +148,23 @@ def miner_deploy( timeout: int = typer.Option(600, "--timeout", help="Deployment timeout in seconds"), ): """ - One-command deployment: Upload -> Deploy -> Commit. + One-command deployment: Deploy -> Commit. Combines the miner deployment workflow into a single command: - 1. Upload policy to HuggingFace (skip with --skip-upload) - 2. 
Deploy to Basilica (skip with --skip-deploy) - 3. Commit on-chain (skip with --skip-commit) + 1. Deploy Docker image to Basilica (skip with --skip-deploy) + 2. Commit on-chain (skip with --skip-commit) Examples: # Full deployment - kinitro miner deploy -r user/policy -p ./my-policy --netuid 123 - - # Skip upload (already on HuggingFace) - kinitro miner deploy -r user/policy --skip-upload --revision abc123 --netuid 123 + kinitro miner deploy --image user/policy:v1 --netuid 123 # Skip deployment (already deployed) - kinitro miner deploy -r user/policy --skip-upload --revision abc123 \ - --skip-deploy --deployment-id https://my-policy.basilica.ai --netuid 123 + kinitro miner deploy --image user/policy:v1 --skip-deploy \\ + --deployment-id my-deploy-id --netuid 123 # Dry run to see what would happen - kinitro miner deploy -r user/policy -p ./my-policy --netuid 123 --dry-run + kinitro miner deploy --image user/policy:v1 --netuid 123 --dry-run """ - # Validate arguments - if not skip_upload and not policy_path: - typer.echo("Error: --path is required unless --skip-upload is set", err=True) - raise typer.Exit(1) - - if skip_upload and not revision: - typer.echo("Error: --revision is required when --skip-upload is set", err=True) - raise typer.Exit(1) - if skip_deploy and not deployment_id: typer.echo("Error: --deployment-id is required when --skip-deploy is set", err=True) raise typer.Exit(1) @@ -277,25 +175,16 @@ def miner_deploy( # Get credentials from env if not provided api_token = basilica_api_token or os.environ.get("BASILICA_API_TOKEN") - hf = hf_token or os.environ.get("HF_TOKEN") # Validate credentials - if not dry_run: - if not skip_upload and not hf: - typer.echo("Error: HF_TOKEN not configured", err=True) - raise typer.Exit(1) - if not skip_deploy and not api_token: - typer.echo("Error: BASILICA_API_TOKEN not configured", err=True) - typer.echo("Set it via --api-token or BASILICA_API_TOKEN environment variable") - typer.echo("\nTo get a token, run: basilica 
tokens create") - raise typer.Exit(1) - - revision_value = revision + if not dry_run and not skip_deploy and not api_token: + typer.echo("Error: BASILICA_API_TOKEN not configured", err=True) + typer.echo("Set it via --api-token or BASILICA_API_TOKEN environment variable") + typer.echo("\nTo get a token, run: basilica tokens create") + raise typer.Exit(1) # Determine steps steps = [] - if not skip_upload: - steps.append("upload") if not skip_deploy: steps.append("deploy") if not skip_commit: @@ -304,11 +193,7 @@ def miner_deploy( typer.echo("=" * 60) typer.echo("KINITRO DEPLOYMENT") typer.echo("=" * 60) - typer.echo(f" Repository: {repo}") - if policy_path: - typer.echo(f" Policy Path: {policy_path}") - if revision_value: - typer.echo(f" Revision: {revision_value}") + typer.echo(f" Image: {image}") if deployment_id: typer.echo(f" Deployment ID: {deployment_id}") typer.echo(f" Steps: {' -> '.join(steps) if steps else 'none'}") @@ -316,112 +201,29 @@ def miner_deploy( typer.echo(" Mode: DRY RUN") typer.echo("=" * 60) - # Maximum allowed repo size (same as verification limit, configurable via env var) - max_repo_size_gb = float(os.environ.get("KINITRO_MAX_REPO_SIZE_GB", "5.0")) - max_repo_size_bytes = int(max_repo_size_gb * 1024 * 1024 * 1024) - - # Step 1: Upload to HuggingFace - if not skip_upload: - typer.echo(f"\n[1/{len(steps)}] Uploading to HuggingFace ({repo})...") - - if dry_run: - typer.echo(f" [DRY RUN] Would upload {policy_path} to {repo}") - revision_value = "dry-run-revision" - else: - try: - if policy_path is None: - raise typer.Exit(1) - policy_path_value = policy_path - - # Check local folder size before uploading - total_size = 0 - for dirpath, dirnames, filenames in os.walk(policy_path_value): - for filename in filenames: - filepath = os.path.join(dirpath, filename) - if os.path.isfile(filepath): - total_size += os.path.getsize(filepath) - - if total_size > max_repo_size_bytes: - typer.echo( - f"\nError: Folder size {total_size / (1024 * 1024):.2f}MB 
exceeds limit of {max_repo_size_gb}GB", - err=True, - ) - typer.echo("Please reduce your policy folder size to stay within the limit.") - raise typer.Exit(1) - - typer.echo( - f" Folder size: {total_size / (1024 * 1024):.2f}MB (max: {max_repo_size_gb}GB)" - ) - - api = HfApi(token=hf) - - # Create repo if it doesn't exist - try: - api.create_repo(repo_id=repo, exist_ok=True, private=False) - typer.echo(f" Repository ready: {repo}") - except Exception: - typer.echo(f" Repository already exists or created: {repo}") - - # Upload folder - typer.echo(f" Uploading from {policy_path_value}...") - result = api.upload_folder( - folder_path=policy_path_value, - repo_id=repo, - commit_message=message, - ) - revision_value = result.commit_url.split("/")[-1] - typer.echo(" Upload successful!") - typer.echo(f" Revision: {revision_value}") - - except typer.Exit: - raise - except Exception as e: - typer.echo(f"\nUpload failed: {e}", err=True) - raise typer.Exit(1) - else: - if revision_value is None: - typer.echo("Error: --revision is required when --skip-upload is set", err=True) - raise typer.Exit(1) - typer.echo(f"\nSkipping upload, using revision: {revision_value[:12]}...") - - # Step 2: Deploy to Basilica + # Step 1: Deploy to Basilica if not skip_deploy: - step_num = 2 if not skip_upload else 1 + step_num = 1 typer.echo(f"\n[{step_num}/{len(steps)}] Deploying to Basilica...") - if revision_value is None: - typer.echo("Error: revision is required for deployment", err=True) - raise typer.Exit(1) - if dry_run: - typer.echo(f" [DRY RUN] Would deploy {repo}@{revision_value[:12]}...") + typer.echo(f" [DRY RUN] Would deploy {image}") deployment_id = "dry-run-deployment-id" else: - # Derive deployment name - name = deployment_name or repo.replace("/", "-").lower() + # Derive deployment name from image + name = deployment_name or image.split(":", maxsplit=1)[0].replace("/", "-").lower() name = "".join(c if c.isalnum() or c == "-" else "-" for c in name).strip("-")[:63] typer.echo(f" 
Deployment Name: {name}") - # Create client and build deployment configuration client = BasilicaClient(api_key=api_token) - source_code = _get_deployment_source(repo, revision_value) - - # Note: kinitro is NOT included in pip_packages because the miner template - # is self-contained (includes rl_interface.py locally). - env_vars: dict[str, str] = {"HF_REPO": repo, "HF_REVISION": revision_value} - if hf: - env_vars["HF_TOKEN"] = hf deploy_kwargs: dict = { "name": name, - "source": source_code, - "image": _get_docker_image(gpu_count), + "image": image, "port": 8000, - "env": env_vars, "cpu": cpu, "memory": memory, - "pip_packages": PIP_PACKAGES, "timeout": timeout, } @@ -430,7 +232,6 @@ def miner_deploy( if min_gpu_memory_gb is not None: deploy_kwargs["min_gpu_memory_gb"] = min_gpu_memory_gb - # Deploy typer.echo(" Deploying (this may take several minutes)...") try: deployment = client.deploy(**deploy_kwargs) @@ -441,28 +242,30 @@ def miner_deploy( deployment_id = _extract_deployment_id(deployment) typer.echo(f" Deployment ID: {deployment_id}") + + # Enroll for public metadata so validators can verify the deployment + try: + client.enroll_metadata(deployment.name, enabled=True) + typer.echo(" Public metadata enrolled for validator verification.") + except Exception as e: + typer.echo(f" Warning: Could not enroll public metadata: {e}", err=True) except Exception as e: typer.echo(f"\nDeployment failed: {e}", err=True) raise typer.Exit(1) else: typer.echo(f"\nSkipping deployment, using deployment ID: {deployment_id}") - # Step 3: Commit on-chain + # Step 2: Commit on-chain if not skip_commit and deployment_id: step_num = len(steps) typer.echo(f"\n[{step_num}/{len(steps)}] Committing on-chain...") - if revision_value is None: - typer.echo("Error: revision is required for on-chain commit", err=True) - raise typer.Exit(1) if netuid is None: typer.echo("Error: netuid is required for on-chain commit", err=True) raise typer.Exit(1) if dry_run: - typer.echo( - f" [DRY RUN] 
Would commit {repo}@{revision_value[:12]}... with deployment_id {deployment_id}" - ) + typer.echo(f" [DRY RUN] Would commit deployment_id {deployment_id}") else: wallet = Wallet(name=wallet_name, hotkey=hotkey_name) typer.echo(f" Wallet: {wallet.hotkey.ss58_address[:16]}...") @@ -473,8 +276,6 @@ async def _commit_on_chain() -> bool: subtensor=subtensor, wallet=wallet, netuid=netuid, - repo=repo, - revision=revision_value, deployment_id=deployment_id, ) @@ -493,8 +294,6 @@ async def _commit_on_chain() -> bool: else: typer.echo("DEPLOYMENT COMPLETE") typer.echo("=" * 60) - typer.echo(f" Repository: {repo}") - revision_summary = revision_value[:12] if revision_value else "N/A" - typer.echo(f" Revision: {revision_summary}...") + typer.echo(f" Image: {image}") typer.echo(f" Deployment ID: {deployment_id or 'N/A'}") typer.echo("=" * 60) diff --git a/kinitro/cli/miner/template.py b/kinitro/cli/miner/template.py index 08c8dc2..dad0b88 100644 --- a/kinitro/cli/miner/template.py +++ b/kinitro/cli/miner/template.py @@ -54,8 +54,11 @@ def init_miner( typer.echo(" 1. Edit policy.py to implement your policy") typer.echo(" 2. Add your model weights to the directory") typer.echo(" 3. Test locally: uvicorn server:app --port 8001") - typer.echo(" 4. Upload to HuggingFace: huggingface-cli upload user/repo .") - typer.echo(" 5. Deploy to Basilica: kinitro miner push --repo user/repo --revision SHA") + typer.echo(" 4. Build Docker image: docker build -t user/policy:v1 .") + typer.echo(" 5. Push to registry: docker push user/policy:v1") typer.echo( - " 6. Or use one-command deploy: kinitro miner deploy -r user/repo -p . --netuid ..." + " 6. Deploy to Basilica: kinitro miner push --image user/policy:v1 --name my-policy" + ) + typer.echo( + " 7. Or use one-command deploy: kinitro miner deploy --image user/policy:v1 --netuid ..." 
) diff --git a/kinitro/cli/miner/verify.py b/kinitro/cli/miner/verify.py new file mode 100644 index 0000000..b33a7f7 --- /dev/null +++ b/kinitro/cli/miner/verify.py @@ -0,0 +1,140 @@ +"""Standalone metadata verification command for miners.""" + +import asyncio + +import typer +from bittensor import AsyncSubtensor + +from kinitro.chain.commitments import ( + MinerCommitment, + _query_commitment_by_hotkey_async, + parse_commitment, +) +from kinitro.executor.verification import MetadataVerifier +from kinitro.types import BlockNumber, Hotkey, MinerUID + + +async def _get_hotkey_for_uid(network: str, netuid: int, uid: int) -> str | None: + """Look up a hotkey by UID using AsyncSubtensor.""" + async with AsyncSubtensor(network=network) as subtensor: + neurons = await subtensor.neurons(netuid=netuid) + if uid < 0 or uid >= len(neurons): + return None + return neurons[uid].hotkey + + +async def _read_commitment_from_chain( + network: str, netuid: int, hotkey: str +) -> MinerCommitment | None: + """Read and parse a miner's commitment from chain.""" + async with AsyncSubtensor(network=network) as subtensor: + raw, block = await _query_commitment_by_hotkey_async(subtensor, netuid, hotkey) + + if not raw: + return None + + parsed = parse_commitment(raw) + if not parsed["deployment_id"] and not parsed.get("encrypted_deployment"): + return None + + return MinerCommitment( + uid=MinerUID(0), + hotkey=Hotkey(hotkey), + deployment_id=parsed["deployment_id"], + committed_block=BlockNumber(block if block is not None else 0), + encrypted_deployment=parsed.get("encrypted_deployment"), + ) + + +def _print_result(result) -> None: + """Print a MetadataVerificationResult in a human-readable format.""" + status = "VERIFIED" if result.verified else "FAILED" + typer.echo(f"\nVerification: {status}") + typer.echo(f" Deployment ID: {result.deployment_id}") + if result.state is not None: + typer.echo(f" State: {result.state}") + if result.image is not None: + tag_str = f":{result.image_tag}" if 
result.image_tag else "" + typer.echo(f" Image: {result.image}{tag_str}") + if result.image_public is not None: + typer.echo(f" Image Public: {result.image_public}") + if result.uptime_seconds is not None: + typer.echo(f" Uptime: {result.uptime_seconds:.0f}s") + if result.failure_reason: + typer.echo(f" Failure Reason: {result.failure_reason}") + if result.error: + typer.echo(f" Error: {result.error}") + + +def verify( + deployment_id: str | None = typer.Option( + None, "--deployment-id", "-d", help="Basilica deployment name to verify directly" + ), + netuid: int | None = typer.Option(None, "--netuid", help="Subnet UID (for chain lookup)"), + uid: int | None = typer.Option(None, "--uid", help="Miner UID (requires --netuid)"), + hotkey: str | None = typer.Option( + None, "--hotkey", help="Miner hotkey SS58 address (requires --netuid)" + ), + network: str = typer.Option("finney", "--network", help="Bittensor network"), +): + """ + Verify a miner's Basilica deployment metadata. + + Two modes: + + 1. Direct: Verify a specific deployment by its Basilica deployment name. + kinitro miner verify --deployment-id my-deployment + + 2. Chain: Read a miner's commitment from chain, then verify the deployment. + kinitro miner verify --netuid 1 --uid 5 + kinitro miner verify --netuid 1 --hotkey 5Dxxx... + """ + if deployment_id: + # Direct mode: verify a specific deployment + commitment = MinerCommitment( + uid=MinerUID(0), + hotkey=Hotkey(""), + deployment_id=deployment_id, + committed_block=BlockNumber(0), + ) + typer.echo(f"Verifying deployment: {deployment_id}") + + elif netuid is not None and (uid is not None or hotkey is not None): + # Chain mode: read commitment then verify + if hotkey: + query_hotkey = hotkey + typer.echo(f"Looking up commitment for hotkey {hotkey[:16]}... 
on netuid {netuid}") + elif uid is not None: + typer.echo(f"Looking up hotkey for UID {uid} on netuid {netuid}...") + query_hotkey = asyncio.run(_get_hotkey_for_uid(network, netuid, uid)) + if not query_hotkey: + typer.echo(f"Error: UID {uid} not found on subnet {netuid}", err=True) + raise typer.Exit(1) + typer.echo(f" Hotkey: {query_hotkey[:16]}...") + else: + typer.echo("Error: --uid or --hotkey required with --netuid", err=True) + raise typer.Exit(1) + + typer.echo("Reading commitment from chain...") + commitment = asyncio.run(_read_commitment_from_chain(network, netuid, query_hotkey)) + if not commitment: + typer.echo("Error: No valid commitment found on chain", err=True) + raise typer.Exit(1) + + typer.echo(f" Deployment ID: {commitment.deployment_id}") + + else: + typer.echo( + "Error: Provide --deployment-id, or --netuid with --uid/--hotkey", + err=True, + ) + raise typer.Exit(1) + + # Run verification + typer.echo("Running metadata verification...") + verifier = MetadataVerifier() + result = asyncio.run(verifier.verify_miner(commitment)) + _print_result(result) + + if not result.verified: + raise typer.Exit(1) diff --git a/kinitro/config.py b/kinitro/config.py index 7d39ef8..6456938 100644 --- a/kinitro/config.py +++ b/kinitro/config.py @@ -51,8 +51,6 @@ class MinerConfig(NetworkConfig): """Miner-specific configuration.""" # Model settings - huggingface_repo: str | None = Field(default=None, description="HuggingFace model repo") - model_revision: str | None = Field(default=None, description="Model revision/commit SHA") deployment_id: str | None = Field(default=None, description="Basilica deployment ID") # Docker settings diff --git a/kinitro/executor/config.py b/kinitro/executor/config.py index b764bff..8617f35 100644 --- a/kinitro/executor/config.py +++ b/kinitro/executor/config.py @@ -124,38 +124,6 @@ def normalize_mem_limit(cls, v: str) -> str: # Logging log_level: str = Field(default="INFO", description="Logging level") - # Model verification settings 
- verification_enabled: bool = Field( - default=True, - description="Enable spot-check verification of miner models", - ) - verification_rate: float = Field( - default=0.05, - ge=0.0, - le=1.0, - description="Probability of verifying each miner (0.0 to 1.0)", - ) - verification_tolerance: float = Field( - default=1e-3, - description="Relative tolerance for comparing actions", - ) - verification_samples: int = Field( - default=5, - ge=1, - le=20, - description="Number of test observations per verification", - ) - verification_cache_dir: str | None = Field( - default=None, - description="Directory to cache downloaded HuggingFace models", - ) - verification_max_repo_size_gb: float = Field( - default=5.0, - ge=0.1, - le=50.0, - description="Maximum allowed HuggingFace repo size in GB", - ) - # Concurrent executor settings use_concurrent_executor: bool = Field( default=False, diff --git a/kinitro/executor/verification.py b/kinitro/executor/verification.py index 2a4f3c3..e34d867 100644 --- a/kinitro/executor/verification.py +++ b/kinitro/executor/verification.py @@ -1,436 +1,327 @@ -""" -Model verification module for spot-checking miner deployments. - -This module verifies that what miners deploy to Basilica matches what they -uploaded to HuggingFace. It works by: +"""Deployment metadata verification for miner Basilica deployments. -1. Downloading the policy from HuggingFace -2. Running inference locally with a fixed seed -3. Comparing against the miner's endpoint response +Uses the Basilica public metadata API to verify that miner deployments +are running with publicly pullable Docker images. This replaces the previous +spot-check system that required downloading full HuggingFace repos. -If outputs differ significantly, the miner may be running different code -than what they committed. +Verification checks: +1. Deployment exists and metadata is publicly accessible +2. Deployment is in a healthy state (Active/Running) +3. 
Docker image is publicly pullable from its container registry """ +from __future__ import annotations + import asyncio -import hashlib -import importlib.util -import os -import random -import shutil -import sys -import tempfile +import re from dataclasses import dataclass -from pathlib import Path -from typing import Any import httpx -import numpy as np import structlog -from huggingface_hub import HfApi, snapshot_download +from basilica import BasilicaClient -from kinitro.rl_interface import Action, Observation, ProprioKeys -from kinitro.types import Hotkey, MinerUID, VerificationDetails +from kinitro.chain.commitments import MinerCommitment +from kinitro.types import Hotkey, MinerUID logger = structlog.get_logger() +# Deployment states considered healthy +HEALTHY_STATES: frozenset[str] = frozenset({"Active", "Running"}) + +# Docker Hub registry host used when no registry is specified in the image ref +DOCKER_HUB_REGISTRY = "registry-1.docker.io" +DOCKER_HUB_AUTH_URL = "https://auth.docker.io/token" + +# Accept header for Docker Registry HTTP V2 manifest requests +_MANIFEST_ACCEPT = ( + "application/vnd.docker.distribution.manifest.v2+json, " + "application/vnd.oci.image.manifest.v1+json" +) + @dataclass -class VerificationResult: - """Result of a model verification check.""" +class ImageRef: + """Parsed container image reference.""" + + registry: str # e.g. "registry-1.docker.io", "ghcr.io" + repository: str # e.g. "library/python", "pytorch/pytorch" + tag: str # e.g. 
"3.11-slim", "latest" + + +@dataclass +class MetadataVerificationResult: + """Result of a metadata-based deployment verification.""" miner_uid: MinerUID miner_hotkey: Hotkey - repo: str - revision: str + deployment_id: str verified: bool - match_score: float # 0.0 = no match, 1.0 = perfect match + state: str | None = None + image: str | None = None + image_tag: str | None = None + image_public: bool | None = None + uptime_seconds: float | None = None error: str | None = None - details: VerificationDetails | None = None + failure_reason: str | None = None + +def parse_image_ref(image: str, image_tag: str | None = None) -> ImageRef: + """Parse a Docker image reference into registry, repository, and tag. -class PolicyVerifier: + Handles: + - Docker Hub shorthand: ``python:3.11-slim`` → registry-1.docker.io/library/python:3.11-slim + - Docker Hub org: ``pytorch/pytorch:2.1.0`` → registry-1.docker.io/pytorch/pytorch:2.1.0 + - Fully qualified: ``ghcr.io/org/image:v1`` → ghcr.io/org/image:v1 """ - Verifies that miner deployments match their HuggingFace uploads. + # If image_tag provided separately, strip any tag already on the image name + if image_tag: + name = image.split(":")[0] + tag = image_tag + elif ":" in image: + name, tag = image.rsplit(":", 1) + else: + name = image + tag = "latest" + + # Determine if the first component is a registry host. + # Registry hosts contain a dot or a colon (port), or are "localhost". + parts = name.split("/", 1) + if len(parts) == 1: + # Simple name like "python" → Docker Hub library image + return ImageRef( + registry=DOCKER_HUB_REGISTRY, + repository=f"library/{name}", + tag=tag, + ) + + first = parts[0] + has_dot = "." in first + has_colon = ":" in first + is_localhost = first == "localhost" + + if has_dot or has_colon or is_localhost: + # Fully qualified: ghcr.io/org/image or localhost:5000/img + return ImageRef(registry=first, repository=parts[1], tag=tag) + + # No registry prefix → Docker Hub with org, e.g. 
"pytorch/pytorch" + return ImageRef(registry=DOCKER_HUB_REGISTRY, repository=name, tag=tag) + + +class MetadataVerifier: + """Verifies miner Basilica deployments using the public metadata API. - Uses spot-checking: randomly selects a percentage of evaluations - to verify, comparing local inference against remote endpoint. + Checks: + 1. Deployment metadata is accessible (miner enrolled for public metadata) + 2. Deployment is in a healthy state (Active/Running) + 3. Docker image is publicly pullable from its container registry """ - # Default max repo size: 5GB - DEFAULT_MAX_REPO_SIZE_GB = 5.0 - - def __init__( - self, - verification_rate: float = 0.05, # 5% of evaluations - tolerance: float = 1e-3, # Relative tolerance for floating point comparison - num_samples: int = 5, # Number of observations to compare - cache_dir: str | None = None, - max_repo_size_gb: float = DEFAULT_MAX_REPO_SIZE_GB, - ): - """ - Initialize the policy verifier. - - Args: - verification_rate: Probability of verifying each miner (0.0 to 1.0) - tolerance: Relative tolerance for comparing actions - num_samples: Number of test observations per verification - cache_dir: Directory to cache downloaded models - max_repo_size_gb: Maximum allowed HuggingFace repo size in GB - - Raises: - ValueError: If any parameter is invalid - """ - if not 0.0 <= verification_rate <= 1.0: - raise ValueError("verification_rate must be between 0.0 and 1.0") - if tolerance < 0: - raise ValueError("tolerance must be >= 0") - if num_samples <= 0: - raise ValueError("num_samples must be >= 1") - if max_repo_size_gb <= 0: - raise ValueError("max_repo_size_gb must be > 0") - - self.verification_rate = verification_rate - self.tolerance = tolerance - self.num_samples = num_samples - self.cache_dir = cache_dir or tempfile.mkdtemp(prefix="kinitro_verify_") - self.max_repo_size_bytes = int(max_repo_size_gb * 1024 * 1024 * 1024) - # Any: cached policies are user-provided objects with no shared base class - self._policy_cache: 
dict[str, Any] = {} - - def should_verify(self) -> bool: - """Randomly decide whether to verify based on verification_rate.""" - return random.random() < self.verification_rate - - async def verify_miner( - self, - miner_uid: MinerUID, - miner_hotkey: Hotkey, - repo: str, - revision: str, - endpoint: str, - ) -> VerificationResult: - """ - Verify a miner's deployment matches their HuggingFace model. - - Args: - miner_uid: Miner's UID - miner_hotkey: Miner's hotkey - repo: HuggingFace repo (e.g., "user/model") - revision: HuggingFace commit SHA - endpoint: Miner's Basilica endpoint URL - - Returns: - VerificationResult with match status - """ + def __init__(self) -> None: + # No API key needed for public metadata reads + self._client = BasilicaClient() + + async def verify_miner(self, commitment: MinerCommitment) -> MetadataVerificationResult: + """Verify a single miner's deployment via metadata API.""" logger.info( - "verification_starting", - miner_uid=miner_uid, - repo=repo, - revision=revision[:12], + "metadata_verification_starting", + miner_uid=commitment.uid, + deployment_id=commitment.deployment_id, ) try: - # Load policy from HuggingFace - policy = await self._load_policy_from_hf(repo, revision) - - # Generate deterministic test observations - # Use hashlib for cross-process determinism (hash() is randomized by PYTHONHASHSEED) - seed_str = f"{miner_uid}:{revision}".encode() - test_seed = int(hashlib.sha256(seed_str).hexdigest()[:8], 16) % (2**31) - rng = np.random.default_rng(test_seed) - - # Generate Observation objects for testing - test_observations = [] - for _ in range(self.num_samples): - obs = Observation( - proprio={ - ProprioKeys.EE_POS: rng.uniform(-1, 1, size=3).tolist(), - ProprioKeys.EE_QUAT: [0.0, 0.0, 0.0, 1.0], # Identity quaternion - ProprioKeys.EE_VEL_LIN: rng.uniform(-0.5, 0.5, size=3).tolist(), - ProprioKeys.EE_VEL_ANG: rng.uniform(-0.5, 0.5, size=3).tolist(), - ProprioKeys.GRIPPER: [float(rng.uniform(0, 1))], - }, - rgb={}, # No images 
for verification (simpler comparison)
-            )
-            test_observations.append(obs)
-
-        # Get actions from local policy
-        local_actions = []
-        for i, obs in enumerate(test_observations):
-            seed = test_seed + i
-            self._set_seed(seed)
-            action = await self._get_local_action(policy, obs, seed)
-            local_actions.append(action)
-
-        # Get actions from remote endpoint
-        remote_actions = []
-        for i, obs in enumerate(test_observations):
-            seed = test_seed + i
-            action = await self._get_remote_action(endpoint, obs, seed)
-            remote_actions.append(action)
-
-        # Compare actions
-        match_scores = []
-        for local, remote in zip(local_actions, remote_actions):
-            if local is None or remote is None:
-                match_scores.append(0.0)
-            else:
-                match_scores.append(self._compare_actions(local, remote))
-
-        avg_match = np.mean(match_scores)
-        verified = avg_match >= (1.0 - self.tolerance)
-
-        logger.info(
-            "verification_complete",
-            miner_uid=miner_uid,
-            verified=verified,
-            match_score=round(avg_match, 4),
-            num_samples=self.num_samples,
-        )
-
-        return VerificationResult(
-            miner_uid=miner_uid,
-            miner_hotkey=miner_hotkey,
-            repo=repo,
-            revision=revision,
-            verified=verified,
-            match_score=avg_match,
-            details={
-                "match_scores": match_scores,
-                "test_seed": test_seed,
-                "num_samples": self.num_samples,
-            },
+            metadata = await asyncio.to_thread(
+                self._client.get_public_deployment_metadata,
+                commitment.deployment_id,
             )
-
         except Exception as e:
-            logger.error(
-                "verification_failed",
-                miner_uid=miner_uid,
+            logger.warning(
+                "metadata_api_error",
+                miner_uid=commitment.uid,
+                deployment_id=commitment.deployment_id,
                 error=str(e),
             )
-            return VerificationResult(
-                miner_uid=miner_uid,
-                miner_hotkey=miner_hotkey,
-                repo=repo,
-                revision=revision,
+            return MetadataVerificationResult(
+                miner_uid=commitment.uid,
+                miner_hotkey=commitment.hotkey,
+                deployment_id=commitment.deployment_id,
                 verified=False,
-                match_score=0.0,
                 error=str(e),
+                failure_reason="Metadata API call failed (deployment may not have public metadata enrolled)",
             )

-    async def _load_policy_from_hf(self, repo: str, revision: str) -> Any:
-        """
-        Load a policy from HuggingFace.
-
-        Downloads the model files and imports the policy class.
-        Checks repo size before downloading to prevent DoS attacks.
-        """
-        cache_key = f"{repo}:{revision}"
-        if cache_key in self._policy_cache:
-            return self._policy_cache[cache_key]
+        state = metadata.state
+        image = metadata.image
+        image_tag = metadata.image_tag

-        # Check repo size before downloading
-        api = HfApi()
-        try:
-            repo_info = await asyncio.to_thread(
-                api.repo_info,
-                repo_id=repo,
-                revision=revision,
-                repo_type="model",
-            )
-
-            # Calculate total size from siblings (files in repo)
-            total_size = 0
-            if repo_info.siblings:
-                for sibling in repo_info.siblings:
-                    if sibling.size is not None:
-                        total_size += sibling.size
-
-            if total_size > self.max_repo_size_bytes:
-                size_gb = total_size / (1024 * 1024 * 1024)
-                max_gb = self.max_repo_size_bytes / (1024 * 1024 * 1024)
-                raise ValueError(
-                    f"Repository size ({size_gb:.2f}GB) exceeds maximum allowed ({max_gb:.2f}GB)"
-                )
-
-            logger.info(
-                "repo_size_checked",
-                repo=repo,
-                revision=revision[:12],
-                size_mb=round(total_size / (1024 * 1024), 2),
+        # Check deployment state
+        if state not in HEALTHY_STATES:
+            return MetadataVerificationResult(
+                miner_uid=commitment.uid,
+                miner_hotkey=commitment.hotkey,
+                deployment_id=commitment.deployment_id,
+                verified=False,
+                state=state,
+                image=image,
+                image_tag=image_tag,
+                uptime_seconds=metadata.uptime_seconds,
+                failure_reason=f"Deployment state '{state}' is not healthy",
             )
-        except Exception as e:
-            if "exceeds maximum" in str(e):
-                raise
-            # Fail closed: don't download if we can't verify size (security requirement)
-            logger.error(
-                "repo_size_check_failed",
-                repo=repo,
-                error=str(e),
+        # Check image is publicly pullable
+        if not image:
+            return MetadataVerificationResult(
+                miner_uid=commitment.uid,
+                miner_hotkey=commitment.hotkey,
+                deployment_id=commitment.deployment_id,
+                verified=False,
+                state=state,
+                image_public=False,
+                uptime_seconds=metadata.uptime_seconds,
+                failure_reason="No image reported in deployment metadata",
             )
-            raise ValueError(
-                f"Cannot verify repository size for {repo}: {e}. "
-                "Size check is required for security."
-            ) from e
-
-        # Download from HuggingFace
-        model_path = await asyncio.to_thread(
-            snapshot_download,
-            repo,
-            revision=revision,
-            cache_dir=self.cache_dir,
-            local_dir=os.path.join(self.cache_dir, repo.replace("/", "_"), revision[:12]),
-        )
-
-        if isinstance(model_path, list):
-            raise ValueError("Unexpected model path format from HuggingFace download")
-        model_path_str = str(model_path)
-
-        # Load the policy module
-        policy_file = os.path.join(model_path_str, "policy.py")
-        if not os.path.exists(policy_file):
-            raise FileNotFoundError(f"policy.py not found in {repo}@{revision}")
-
-        # Import the policy module dynamically
-        spec = importlib.util.spec_from_file_location("miner_policy", policy_file)
-        if spec is None or spec.loader is None:
-            raise ImportError(f"Unable to load policy module from {policy_file}")
-        module = importlib.util.module_from_spec(spec)
-        # Add model path to sys.path for relative imports
-        sys.path.insert(0, model_path_str)
-        try:
-            spec.loader.exec_module(module)
-        finally:
-            sys.path.remove(model_path_str)
-
-        # Instantiate the policy
-        if not hasattr(module, "RobotPolicy"):
-            raise AttributeError(f"RobotPolicy class not found in {repo}@{revision}")
+        image_public = await _check_image_public(image, image_tag)

-        policy = module.RobotPolicy()
-        self._policy_cache[cache_key] = policy
+        if not image_public:
+            full_ref = f"{image}:{image_tag}" if image_tag else image
+            return MetadataVerificationResult(
+                miner_uid=commitment.uid,
+                miner_hotkey=commitment.hotkey,
+                deployment_id=commitment.deployment_id,
+                verified=False,
+                state=state,
+                image=image,
+                image_tag=image_tag,
+                image_public=False,
+                uptime_seconds=metadata.uptime_seconds,
+                failure_reason=f"Image '{full_ref}' is not publicly pullable",
+            )

-        logger.info(
-            "policy_loaded_from_hf",
-            repo=repo,
-            revision=revision[:12],
-            model_path=model_path_str,
+        # All checks passed
+        return MetadataVerificationResult(
+            miner_uid=commitment.uid,
+            miner_hotkey=commitment.hotkey,
+            deployment_id=commitment.deployment_id,
+            verified=True,
+            state=state,
+            image=image,
+            image_tag=image_tag,
+            image_public=True,
+            uptime_seconds=metadata.uptime_seconds,
         )
-        return policy

+    async def verify_miners(
+        self, commitments: list[MinerCommitment]
+    ) -> list[MetadataVerificationResult]:
+        """Verify all miners concurrently."""
+        tasks = [self.verify_miner(c) for c in commitments]
+        return await asyncio.gather(*tasks)

-    def _set_seed(self, seed: int) -> None:
-        """Set random seeds for reproducibility."""
-        random.seed(seed)
-        np.random.seed(seed)
-
-    async def _get_local_action(
-        self,
-        policy: Any,
-        obs: Observation,
-        seed: int,  # Any: user-provided policy, no common Protocol
-    ) -> np.ndarray | None:
-        """Get action from local policy."""
-        try:
-            self._set_seed(seed)
-
-            # Try async first, fall back to sync
-            if asyncio.iscoroutinefunction(policy.act):
-                action = await policy.act(obs)
-            else:
-                action = policy.act(obs)
-
-            # Handle different action return types
-            # Note: Check by method/attribute rather than isinstance() because
-            # the miner's bundled rl_interface.py has its own Action class
-            if hasattr(action, "to_array"):
-                return action.to_array()
-            if hasattr(action, "continuous_array"):
-                return action.continuous_array()
-            if isinstance(action, dict):
-                return Action.model_validate(action).continuous_array()
-            if hasattr(action, "numpy"):
-                action = action.numpy()
-            return np.array(action, dtype=np.float32)
-        except Exception as e:
-            logger.warning("local_inference_failed", error=str(e))
-            return None
-
-    async def _get_remote_action(
-        self, endpoint: str, obs: Observation, seed: int
-    ) -> np.ndarray | None:
-        """Get action from remote miner endpoint."""
-        try:
-            url = f"{endpoint.rstrip('/')}/act"
-            # Use the same format as the evaluator: {"obs": Observation}
-            payload = {"obs": obs.model_dump(mode="python")}
-
-            async with httpx.AsyncClient(timeout=10.0) as client:
-                response = await client.post(url, json=payload)
-                response.raise_for_status()
-                data = response.json()
-                action_data = data.get("action", {})
-                if isinstance(action_data, dict):
-                    return Action.model_validate(action_data).continuous_array()
-                return np.array(action_data, dtype=np.float32)
+async def _check_image_public(image: str, image_tag: str | None = None) -> bool:
+    """Check whether a container image is publicly pullable.
-        except Exception as e:
-            logger.warning("remote_inference_failed", error=str(e))
-            return None
-
-    def _compare_actions(self, local: np.ndarray, remote: np.ndarray) -> float:
-        """
-        Compare two action vectors.
-
-        Returns a match score from 0.0 (no match) to 1.0 (perfect match).
-        """
-        if local.shape != remote.shape:
-            return 0.0
-
-        # Use relative comparison for floating point
-        if np.allclose(local, remote, rtol=self.tolerance, atol=1e-6):
-            return 1.0
-
-        # Calculate a continuous match score based on relative error
-        rel_error = np.abs(local - remote) / (np.abs(local) + 1e-8)
-        mean_rel_error = np.mean(rel_error)
-
-        # Convert to match score (exponential decay)
-        match_score = np.exp(-mean_rel_error / self.tolerance)
-        return float(match_score)
-
-    def compute_model_hash(self, model_path: str) -> str:
-        """
-        Compute a deterministic hash of model weights.
-
-        This can be used for plagiarism detection - models with the same
-        hash are copies of each other.
-        """
-        hasher = hashlib.sha256()
-
-        # Find all weight files
-        weight_extensions = [".pt", ".pth", ".safetensors", ".bin", ".ckpt"]
-        weight_files = []
-
-        for ext in weight_extensions:
-            weight_files.extend(Path(model_path).rglob(f"*{ext}"))
-
-        # Sort for deterministic ordering
-        weight_files = sorted(weight_files)
-
-        for weight_file in weight_files:
-            with open(weight_file, "rb") as f:
-                # Read in chunks to handle large files
-                for chunk in iter(lambda: f.read(8192), b""):
-                    hasher.update(chunk)
-
-        return hasher.hexdigest()
-
-    def cleanup(self) -> None:
-        """Clean up cached models."""
-        self._policy_cache.clear()
-        if os.path.exists(self.cache_dir) and self.cache_dir.startswith(tempfile.gettempdir()):
-            shutil.rmtree(self.cache_dir, ignore_errors=True)
+    Queries the Docker Registry HTTP V2 API to verify the manifest exists
+    and is accessible without credentials.
+    """
+    ref = parse_image_ref(image, image_tag)
+
+    try:
+        async with httpx.AsyncClient(timeout=10.0, follow_redirects=True) as client:
+            if ref.registry == DOCKER_HUB_REGISTRY:
+                return await _check_docker_hub(client, ref)
+            return await _check_generic_registry(client, ref)
+    except Exception as e:
+        logger.debug("image_public_check_error", image=image, error=str(e))
+        return False
+
+
+async def _check_docker_hub(client: httpx.AsyncClient, ref: ImageRef) -> bool:
+    """Check image pullability on Docker Hub (requires anonymous token)."""
+    # Get anonymous bearer token
+    token_resp = await client.get(
+        DOCKER_HUB_AUTH_URL,
+        params={
+            "service": "registry.docker.io",
+            "scope": f"repository:{ref.repository}:pull",
+        },
+    )
+    if token_resp.status_code != 200:
+        return False
+
+    token = token_resp.json().get("token")
+    if not token:
+        return False
+
+    # Check manifest
+    manifest_url = f"https://{ref.registry}/v2/{ref.repository}/manifests/{ref.tag}"
+    resp = await client.head(
+        manifest_url,
+        headers={
+            "Authorization": f"Bearer {token}",
+            "Accept": _MANIFEST_ACCEPT,
+        },
+    )
+    return resp.status_code == 200
+
+
+async def _check_generic_registry(client: httpx.AsyncClient, ref: ImageRef) -> bool:
+    """Check image pullability on a generic OCI registry."""
+    manifest_url = f"https://{ref.registry}/v2/{ref.repository}/manifests/{ref.tag}"
+
+    # Try unauthenticated first
+    resp = await client.head(
+        manifest_url,
+        headers={"Accept": _MANIFEST_ACCEPT},
+    )
+
+    if resp.status_code == 200:
+        return True
+
+    # If 401 with Www-Authenticate, try anonymous token exchange
+    if resp.status_code == 401:
+        return await _try_anonymous_token(client, resp, ref)
+
+    return False
+
+
+async def _try_anonymous_token(
+    client: httpx.AsyncClient,
+    unauthorized_resp: httpx.Response,
+    ref: ImageRef,
+) -> bool:
+    """Attempt anonymous token exchange from a 401 Www-Authenticate header."""
+    www_auth = unauthorized_resp.headers.get("www-authenticate", "")
+    if not www_auth:
+        return False
+
+    # Parse Bearer realm="...",service="...",scope="..."
+    realm_match = re.search(r'realm="([^"]+)"', www_auth)
+    service_match = re.search(r'service="([^"]+)"', www_auth)
+
+    if not realm_match:
+        return False
+
+    realm = realm_match.group(1)
+    params: dict[str, str] = {}
+    if service_match:
+        params["service"] = service_match.group(1)
+    params["scope"] = f"repository:{ref.repository}:pull"
+
+    token_resp = await client.get(realm, params=params)
+    if token_resp.status_code != 200:
+        return False
+
+    token = token_resp.json().get("token") or token_resp.json().get("access_token")
+    if not token:
+        return False
+
+    manifest_url = f"https://{ref.registry}/v2/{ref.repository}/manifests/{ref.tag}"
+    resp = await client.head(
+        manifest_url,
+        headers={
+            "Authorization": f"Bearer {token}",
+            "Accept": _MANIFEST_ACCEPT,
+        },
+    )
+    return resp.status_code == 200
diff --git a/kinitro/executor/worker.py b/kinitro/executor/worker.py
index fde03aa..f992d98 100644
--- a/kinitro/executor/worker.py
+++ b/kinitro/executor/worker.py
@@ -12,7 +12,6 @@
     load_and_warmup_env,
     run_evaluation,
 )
-from kinitro.executor.verification import PolicyVerifier, VerificationResult
 from kinitro.types import AffinetesEnv, env_family_from_id

 logger = structlog.get_logger()
@@ -24,8 +23,6 @@ class Worker:

     The worker loads an affinetes-managed evaluation environment and uses
     it to run evaluations against miner policy endpoints.
-
-    It also performs spot-check verification to ensure deployed models
-    match what miners uploaded to HuggingFace.
     """

     def __init__(self, config: ExecutorConfig):
@@ -34,28 +31,6 @@ def __init__(self, config: ExecutorConfig):
         self._envs: dict[str, AffinetesEnv] = {}
         self._env_lock = asyncio.Lock()

-        # Initialize verifier if enabled
-        self._verifier: PolicyVerifier | None = None
-        if config.verification_enabled:
-            self._verifier = PolicyVerifier(
-                verification_rate=config.verification_rate,
-                tolerance=config.verification_tolerance,
-                num_samples=config.verification_samples,
-                cache_dir=config.verification_cache_dir,
-                max_repo_size_gb=config.verification_max_repo_size_gb,
-            )
-            logger.info(
-                "verification_enabled",
-                rate=config.verification_rate,
-                tolerance=config.verification_tolerance,
-                samples=config.verification_samples,
-                max_repo_size_gb=config.verification_max_repo_size_gb,
-            )
-
-        # Track verification results for reporting
-        self._verification_results: list[VerificationResult] = []
-        self._verified_miners: set[str] = set()  # Track which miners we've verified this cycle
-
     def _get_family(self, env_id: str) -> str:
         """Extract family from env_id (e.g., 'metaworld' from 'metaworld/pick-place-v3')."""
         return env_family_from_id(env_id)
@@ -128,9 +103,6 @@ async def execute_task(self, task: Task) -> TaskResult:
             seed=task.seed,
         )

-        # Perform spot-check verification if enabled and not yet verified this miner
-        await self._maybe_verify_miner(task)
-
         try:
             env = await self._get_eval_environment(task.env_id)
@@ -170,78 +142,6 @@
                 error=str(e),
             )

-    async def _maybe_verify_miner(self, task: Task) -> None:
-        """
-        Perform spot-check verification if conditions are met.
-
-        Verification is performed if:
-        - Verification is enabled
-        - Miner has repo and revision info
-        - Miner hasn't been verified yet this cycle
-        - Random chance based on verification_rate
-        """
-        if self._verifier is None:
-            return
-
-        # Skip if missing repo/revision info
-        if not task.miner_repo or not task.miner_revision:
-            logger.debug(
-                "verification_skipped_no_repo",
-                miner_uid=task.miner_uid,
-            )
-            return
-
-        # Only verify each miner once per cycle
-        miner_key = f"{task.miner_uid}:{task.miner_revision}"
-        if miner_key in self._verified_miners:
-            return
-
-        # Random spot-check
-        if not self._verifier.should_verify():
-            return
-
-        # Mark as verified (even if verification fails, don't retry)
-        self._verified_miners.add(miner_key)
-
-        logger.info(
-            "verification_triggered",
-            miner_uid=task.miner_uid,
-            repo=task.miner_repo,
-            revision=task.miner_revision[:12] if task.miner_revision else None,
-        )
-
-        try:
-            result = await self._verifier.verify_miner(
-                miner_uid=task.miner_uid,
-                miner_hotkey=task.miner_hotkey,
-                repo=task.miner_repo,
-                revision=task.miner_revision,
-                endpoint=task.miner_endpoint,
-            )
-
-            self._verification_results.append(result)
-
-            if not result.verified:
-                logger.warning(
-                    "verification_mismatch",
-                    miner_uid=task.miner_uid,
-                    match_score=result.match_score,
-                    error=result.error,
-                )
-            else:
-                logger.info(
-                    "verification_passed",
-                    miner_uid=task.miner_uid,
-                    match_score=result.match_score,
-                )
-
-        except Exception as e:
-            logger.error(
-                "verification_error",
-                miner_uid=task.miner_uid,
-                error=str(e),
-            )
-
     async def execute_batch(self, tasks: list[Task]) -> list[TaskResult]:
         """
         Execute a batch of tasks.
@@ -258,21 +158,8 @@ async def execute_batch(self, tasks: list[Task]) -> list[TaskResult]:
             results.append(result)
         return results

-    def get_verification_results(self) -> list[VerificationResult]:
-        """Get all verification results from this worker."""
-        return self._verification_results.copy()
-
-    def get_failed_verifications(self) -> list[VerificationResult]:
-        """Get verification results where miner failed verification."""
-        return [r for r in self._verification_results if not r.verified]
-
-    def reset_verification_state(self) -> None:
-        """Reset verification state for a new evaluation cycle."""
-        self._verified_miners.clear()
-        self._verification_results.clear()
-
     async def cleanup(self) -> None:
-        """Cleanup all eval environments and verifier."""
+        """Cleanup all eval environments."""
         async with self._env_lock:
             for family, env in list(self._envs.items()):
                 try:
@@ -282,10 +169,6 @@
                     logger.warning("cleanup_error", family=family, error=str(e))
             self._envs.clear()

-        # Cleanup verifier
-        if self._verifier is not None:
-            self._verifier.cleanup()
-
     def force_cleanup(self) -> None:
         """Force cleanup by killing docker containers directly."""
         # Clean up all family-specific containers
diff --git a/kinitro/miner/template/Dockerfile b/kinitro/miner/template/Dockerfile
index 1449bb9..82b255f 100644
--- a/kinitro/miner/template/Dockerfile
+++ b/kinitro/miner/template/Dockerfile
@@ -6,7 +6,7 @@
 # Test:  docker run -p 8000:8000 your-username/kinitro-policy:v1
 # Push:  docker push your-username/kinitro-policy:v1
 #
-# Deploy to Basilica: kinitro miner push --repo user/policy --revision abc123
+# Deploy to Basilica: kinitro miner push --image user/policy:v1 --name my-policy

 FROM python:3.11-slim
diff --git a/kinitro/miner/template/policy.py b/kinitro/miner/template/policy.py
index 82e005f..1add6a6 100644
--- a/kinitro/miner/template/policy.py
+++ b/kinitro/miner/template/policy.py
@@ -62,7 +62,7 @@ async def act(self, observation: Observation):
         #     return action
         #
         # NOTE: If seed is provided, ensure your inference is deterministic.
-        #       The validator may verify that your deployed model matches HuggingFace.
+        #       The validator verifies your deployment via Basilica metadata.

         # Default: random action (seed is already set by server if provided)
         twist = np.random.uniform(-1, 1, size=6).tolist()
diff --git a/kinitro/miner/template/requirements.txt b/kinitro/miner/template/requirements.txt
index a94501d..d820fe6 100644
--- a/kinitro/miner/template/requirements.txt
+++ b/kinitro/miner/template/requirements.txt
@@ -18,9 +18,5 @@ torch>=2.0.0
 # Image processing
 Pillow>=10.0.0

-# HuggingFace (for model hosting)
-huggingface_hub>=0.20.0
-# transformers>=4.30.0
-
 # Gymnasium (for environment compatibility)
 gymnasium>=0.29.0
diff --git a/kinitro/miner/template/server.py b/kinitro/miner/template/server.py
index b7e4d72..4fa870e 100644
--- a/kinitro/miner/template/server.py
+++ b/kinitro/miner/template/server.py
@@ -7,15 +7,17 @@
 DEPLOYMENT OPTIONS:

 1. Basilica Platform (Recommended):
-   - Use kinitro CLI: kinitro miner push --repo YOUR_HF_REPO --revision YOUR_REVISION
-   - Or use one-command deploy: kinitro miner deploy -r YOUR_HF_REPO -p . --netuid YOUR_NETUID
+   - Build image: docker build -t user/policy:v1 .
+   - Push to registry: docker push user/policy:v1
+   - Deploy: kinitro miner push --image user/policy:v1 --name my-policy
+   - Or one-command: kinitro miner deploy --image user/policy:v1 --netuid YOUR_NETUID

 2. Self-Hosted:
    - Run this server directly with uvicorn
    - Ensure your endpoint is publicly accessible

 After deployment, commit your policy on-chain:
-    kinitro miner commit --endpoint YOUR_ENDPOINT_URL --netuid YOUR_NETUID
+    kinitro miner commit --deployment-id YOUR_DEPLOYMENT_ID --netuid YOUR_NETUID

 Endpoints:
     POST /reset - Reset policy for new episode
diff --git a/kinitro/scheduler/config.py b/kinitro/scheduler/config.py
index 01db89a..07e68c3 100644
--- a/kinitro/scheduler/config.py
+++ b/kinitro/scheduler/config.py
@@ -77,5 +77,12 @@ class SchedulerConfig(BaseSettings):
         "If None, all environments are used.",
     )

+    # Metadata verification
+    metadata_verification_enabled: bool = Field(
+        default=True,
+        description="Verify miner deployments via Basilica metadata API before generating tasks. "
+        "Checks that deployments are running and use publicly pullable Docker images.",
+    )
+
     # Logging
     log_level: str = Field(default="INFO", description="Logging level")
diff --git a/kinitro/scheduler/main.py b/kinitro/scheduler/main.py
index 92c253e..8587e38 100644
--- a/kinitro/scheduler/main.py
+++ b/kinitro/scheduler/main.py
@@ -12,6 +12,7 @@
 from kinitro.chain.commitments import read_miner_commitments
 from kinitro.crypto import BackendKeypair
 from kinitro.environments import get_all_environment_ids, get_environments_by_family
+from kinitro.executor.verification import MetadataVerifier
 from kinitro.scheduler.config import SchedulerConfig
 from kinitro.scheduler.scoring import (
     aggregate_task_results,
@@ -206,6 +207,51 @@ async def _run_evaluation_cycle(self) -> None:

         logger.info("found_miners", count=len(miners))

+        # 1.5. Verify miner deployments via Basilica metadata API
+        if self.config.metadata_verification_enabled:
+            verifier = MetadataVerifier()
+            verification_results = await verifier.verify_miners(miners)
+
+            verified_uids: set[int] = set()
+            for result in verification_results:
+                if result.verified:
+                    verified_uids.add(result.miner_uid)
+                    logger.info(
+                        "miner_verified",
+                        miner_uid=result.miner_uid,
+                        image=result.image,
+                        image_tag=result.image_tag,
+                        state=result.state,
+                    )
+                else:
+                    logger.warning(
+                        "miner_verification_failed",
+                        miner_uid=result.miner_uid,
+                        deployment_id=result.deployment_id,
+                        failure_reason=result.failure_reason,
+                        image=result.image,
+                        state=result.state,
+                        error=result.error,
+                    )
+
+            original_count = len(miners)
+            miners = [m for m in miners if m.uid in verified_uids]
+
+            logger.info(
+                "metadata_verification_complete",
+                total_miners=original_count,
+                verified_miners=len(miners),
+                failed_miners=original_count - len(miners),
+            )
+
+            if not miners:
+                logger.warning("no_miners_passed_verification")
+                async with self.storage.session() as session:
+                    await self.storage.fail_cycle(
+                        session, cycle_id, "No miners passed metadata verification"
+                    )
+                return
+
         # 2. Generate and create tasks
         tasks_data = generate_tasks(
             miners=miners,
diff --git a/kinitro/scheduler/task_generator.py b/kinitro/scheduler/task_generator.py
index abec4b4..0fea926 100644
--- a/kinitro/scheduler/task_generator.py
+++ b/kinitro/scheduler/task_generator.py
@@ -99,8 +99,6 @@ def generate_tasks(
                 "miner_uid": miner.uid,
                 "miner_hotkey": miner.hotkey,
                 "miner_endpoint": endpoint,
-                "miner_repo": miner.huggingface_repo,
-                "miner_revision": miner.revision_sha,
                 "env_id": env_id,
                 "seed": seed,
             }
diff --git a/kinitro/types.py b/kinitro/types.py
index 3c6e5b9..b50bf68 100644
--- a/kinitro/types.py
+++ b/kinitro/types.py
@@ -69,11 +69,8 @@ class EncodedImage(TypedDict):
 class ParsedCommitment(TypedDict):
     """Parsed commitment fields from parse_commitment()."""

-    huggingface_repo: str
-    revision_sha: str
     deployment_id: str
     encrypted_deployment: str | None
-    docker_image: str


 class TaskResultData(TypedDict):
@@ -108,16 +105,6 @@ class TaskCreateData(TypedDict):
     env_id: EnvironmentId
     seed: Seed
     task_uuid: NotRequired[TaskUUID]
-    miner_repo: NotRequired[str | None]
-    miner_revision: NotRequired[str | None]
-
-
-class VerificationDetails(TypedDict):
-    """Details dict for VerificationResult.details."""
-
-    match_scores: list[float]
-    test_seed: int
-    num_samples: int


 class StepInfo(TypedDict, total=False):
diff --git a/pyproject.toml b/pyproject.toml
index dc1cff0..bc4812c 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -37,9 +37,7 @@ dependencies = [
     "asyncpg>=0.30.0",
     "alembic>=1.14.0",
     # Deployment (Basilica for GPU serverless)
-    "basilica-sdk==0.15.0",
-    # HuggingFace Hub for model uploads
-    "huggingface-hub>=0.20.0",
+    "basilica-sdk==0.19.0",
 ]

 [project.optional-dependencies]
diff --git a/tests/integration/test_metadata_e2e.py b/tests/integration/test_metadata_e2e.py
new file mode 100644
index 0000000..8bc3f6b
--- /dev/null
+++ b/tests/integration/test_metadata_e2e.py
@@ -0,0 +1,322 @@
+"""Integration tests for the metadata verification E2E flow.
+
+Exercises: push --image, verify --deployment-id, verify --netuid --uid,
+and the full push → commit → verify pipeline — all with mocked externals
+(Basilica API, Docker registry, chain).
+"""
+
+from __future__ import annotations
+
+from dataclasses import dataclass
+from unittest.mock import AsyncMock, MagicMock, patch
+
+from typer.testing import CliRunner
+
+from kinitro.chain.commitments import MinerCommitment
+from kinitro.cli.miner import miner_app
+from kinitro.types import BlockNumber, Hotkey, MinerUID
+
+runner = CliRunner()
+
+
+# ---------------------------------------------------------------------------
+# Fakes / helpers
+# ---------------------------------------------------------------------------
+
+
+@dataclass
+class _FakeDeployment:
+    name: str = "test-deploy"
+    url: str = "https://test-deploy.deployments.basilica.ai"
+    state: str = "Running"
+
+
+@dataclass
+class _FakeMetadata:
+    id: str = "dep-id-1"
+    instance_name: str = "test-deploy"
+    image: str = "python"
+    image_tag: str = "3.11-slim"
+    replicas: int = 1
+    state: str = "Running"
+    uptime_seconds: float = 3600.0
+
+
+def _make_commitment(
+    uid: int = 0,
+    hotkey: str = "5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty",
+    deployment_id: str = "test-deploy",
+) -> MinerCommitment:
+    return MinerCommitment(
+        uid=MinerUID(uid),
+        hotkey=Hotkey(hotkey),
+        deployment_id=deployment_id,
+        committed_block=BlockNumber(1000),
+    )
+
+
+# ---------------------------------------------------------------------------
+# push --image
+# ---------------------------------------------------------------------------
+
+
+class TestPushImage:
+    def test_push_image_deploys_to_basilica(self):
+        """basilica_push with --image deploys without source, enrolls metadata."""
+        mock_client = MagicMock()
+        mock_client.deploy.return_value = _FakeDeployment()
+        mock_client.enroll_metadata.return_value = None
+
+        with patch("kinitro.cli.miner.deploy.BasilicaClient", return_value=mock_client):
+            result = runner.invoke(
+                miner_app,
+                [
+                    "push",
+                    "--image",
+                    "python:3.11-slim",
+                    "--name",
+                    "test-deploy",
+                    "--api-token",
+                    "fake-token",
+                ],
+            )
+
+        assert result.exit_code == 0, result.output
+        assert "DEPLOYMENT SUCCESSFUL" in result.output
+
+        # Verify deploy was called without source or pip_packages
+        deploy_call = mock_client.deploy.call_args
+        assert deploy_call.kwargs.get("source") is None or "source" not in deploy_call.kwargs
+        assert (
+            deploy_call.kwargs.get("pip_packages") is None
+            or "pip_packages" not in deploy_call.kwargs
+        )
+        assert deploy_call.kwargs["image"] == "python:3.11-slim"
+        assert deploy_call.kwargs["name"] == "test-deploy"
+
+        # Metadata enrolled
+        mock_client.enroll_metadata.assert_called_once_with("test-deploy", enabled=True)
+
+    def test_push_requires_image_and_name(self):
+        """push without --image and --name should fail."""
+        result = runner.invoke(
+            miner_app,
+            ["push", "--api-token", "fake-token"],
+        )
+        assert result.exit_code != 0
+
+
+# ---------------------------------------------------------------------------
+# verify --deployment-id
+# ---------------------------------------------------------------------------
+
+
+class TestVerifyDeployment:
+    def test_verify_deployment_happy_path(self):
+        """verify --deployment-id returns verified for healthy deployment."""
+        mock_client = MagicMock()
+        mock_client.get_public_deployment_metadata.return_value = _FakeMetadata()
+
+        with (
+            patch("kinitro.executor.verification.BasilicaClient", return_value=mock_client),
+            patch(
+                "kinitro.executor.verification._check_image_public",
+                new_callable=AsyncMock,
+                return_value=True,
+            ),
+        ):
+            result = runner.invoke(
+                miner_app,
+                ["verify", "--deployment-id", "test-deploy"],
+            )
+
+        assert result.exit_code == 0, result.output
+        assert "VERIFIED" in result.output
+        assert "Running" in result.output
+
+    def test_verify_deployment_unhealthy(self):
+        """verify --deployment-id returns failed for stopped deployment."""
+        mock_client = MagicMock()
+        mock_client.get_public_deployment_metadata.return_value = _FakeMetadata(state="Stopped")
+
+        with patch("kinitro.executor.verification.BasilicaClient", return_value=mock_client):
+            result = runner.invoke(
+                miner_app,
+                ["verify", "--deployment-id", "test-deploy"],
+            )
+
+        assert result.exit_code != 0
+        assert "FAILED" in result.output
+        assert "not healthy" in result.output
+
+    def test_verify_deployment_private_image(self):
+        """verify --deployment-id returns failed when image is not public."""
+        mock_client = MagicMock()
+        mock_client.get_public_deployment_metadata.return_value = _FakeMetadata(
+            image="private.registry.io/secret", image_tag="v1"
+        )
+
+        with (
+            patch("kinitro.executor.verification.BasilicaClient", return_value=mock_client),
+            patch(
+                "kinitro.executor.verification._check_image_public",
+                new_callable=AsyncMock,
+                return_value=False,
+            ),
+        ):
+            result = runner.invoke(
+                miner_app,
+                ["verify", "--deployment-id", "test-deploy"],
+            )
+
+        assert result.exit_code != 0
+        assert "FAILED" in result.output
+        assert "not publicly pullable" in result.output
+
+
+# ---------------------------------------------------------------------------
+# verify --netuid --uid (chain mode)
+# ---------------------------------------------------------------------------
+
+
+class TestVerifyFromChain:
+    def test_verify_from_chain(self):
+        """verify --netuid --uid reads commitment from chain and verifies."""
+        mock_client = MagicMock()
+        mock_client.get_public_deployment_metadata.return_value = _FakeMetadata()
+
+        with (
+            patch(
+                "kinitro.cli.miner.verify._get_hotkey_for_uid",
+                new_callable=AsyncMock,
+                return_value="5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty",
+            ),
+            patch(
+                "kinitro.cli.miner.verify._read_commitment_from_chain",
+                new_callable=AsyncMock,
+                return_value=_make_commitment(),
+            ),
+            patch("kinitro.executor.verification.BasilicaClient", return_value=mock_client),
+            patch(
+                "kinitro.executor.verification._check_image_public",
+                new_callable=AsyncMock,
+                return_value=True,
+            ),
+        ):
+            result = runner.invoke(
+                miner_app,
+                ["verify", "--netuid", "1", "--uid", "5"],
+            )
+
+        assert result.exit_code == 0, result.output
+        assert "VERIFIED" in result.output
+
+    def test_verify_no_commitment_on_chain(self):
+        """verify --netuid --uid fails when no commitment exists."""
+        with (
+            patch(
+                "kinitro.cli.miner.verify._get_hotkey_for_uid",
+                new_callable=AsyncMock,
+                return_value="5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty",
+            ),
+            patch(
+                "kinitro.cli.miner.verify._read_commitment_from_chain",
+                new_callable=AsyncMock,
+                return_value=None,
+            ),
+        ):
+            result = runner.invoke(
+                miner_app,
+                ["verify", "--netuid", "1", "--uid", "5"],
+            )
+
+        assert result.exit_code != 0
+        assert "No valid commitment" in result.output
+
+    def test_verify_uid_not_found(self):
+        """verify --netuid --uid fails when UID doesn't exist."""
+        with patch(
+            "kinitro.cli.miner.verify._get_hotkey_for_uid",
+            new_callable=AsyncMock,
+            return_value=None,
+        ):
+            result = runner.invoke(
+                miner_app,
+                ["verify", "--netuid", "1", "--uid", "999"],
+            )
+
+        assert result.exit_code != 0
+        assert "not found" in result.output
+
+
+# ---------------------------------------------------------------------------
+# Full flow: push --image → commit → verify
+# ---------------------------------------------------------------------------
+
+
+class TestFullFlow:
+    def test_full_push_then_verify(self):
+        """Push image then verify the deployment (mocked externals)."""
+        # Step 1: Push
+        mock_basilica_client = MagicMock()
+        mock_basilica_client.deploy.return_value = _FakeDeployment()
+        mock_basilica_client.enroll_metadata.return_value = None
+
+        with patch("kinitro.cli.miner.deploy.BasilicaClient", return_value=mock_basilica_client):
+            push_result = runner.invoke(
+                miner_app,
+                [
+                    "push",
+                    "--image",
+                    "python:3.11-slim",
+                    "--name",
+                    "test-deploy",
+                    "--api-token",
+                    "fake-token",
+                ],
+            )
+
+        assert push_result.exit_code == 0, push_result.output
+        assert "DEPLOYMENT SUCCESSFUL" in push_result.output
+
+        # Step 2: Verify the same deployment
+        mock_verify_client = MagicMock()
+        mock_verify_client.get_public_deployment_metadata.return_value = _FakeMetadata()
+
+        with (
+            patch(
+                "kinitro.executor.verification.BasilicaClient",
+                return_value=mock_verify_client,
+            ),
+            patch(
+                "kinitro.executor.verification._check_image_public",
+                new_callable=AsyncMock,
+                return_value=True,
+            ),
+        ):
+            verify_result = runner.invoke(
+                miner_app,
+                ["verify", "--deployment-id", "test-deploy"],
+            )
+
+        assert verify_result.exit_code == 0, verify_result.output
+        assert "VERIFIED" in verify_result.output
+
+
+# ---------------------------------------------------------------------------
+# Argument validation
+# ---------------------------------------------------------------------------
+
+
+class TestVerifyArgValidation:
+    def test_verify_no_args(self):
+        """verify with no arguments should fail."""
+        result = runner.invoke(miner_app, ["verify"])
+        assert result.exit_code != 0
+        assert "Provide --deployment-id" in result.output
+
+    def test_verify_netuid_without_uid_or_hotkey(self):
+        """verify --netuid alone should fail."""
+        result = runner.invoke(miner_app, ["verify", "--netuid", "1"])
+        assert result.exit_code != 0
+        assert "Provide --deployment-id" in result.output
diff --git a/tests/unit/test_crypto.py b/tests/unit/test_crypto.py
index dcedf51..ac08e46 100644
--- a/tests/unit/test_crypto.py
+++ b/tests/unit/test_crypto.py
@@ -211,8 +211,8 @@ def test_full_commitment_flow(self, keypair: BackendKeypair) -> None:
         deployment_id = "95edf2b6-e18b-400a-8398-5573df10e5e4"
         encrypted_blob = encrypt_deployment_id(deployment_id, keypair.public_key_hex())

-        # Miner creates commitment (colon-separated format)
-        commitment = f"user/policy:abc123def456:e:{encrypted_blob}"
+        # Miner creates commitment (new format)
+        commitment = f"e:{encrypted_blob}"

         # Verify commitment is under chain limit (128 bytes)
         assert len(commitment) <= 128
@@ -220,8 +220,6 @@
         # Backend parses commitment from chain
         parsed = parse_commitment(commitment)

-        assert parsed["huggingface_repo"] == "user/policy"
-        assert parsed["revision_sha"] == "abc123def456"
         assert parsed["deployment_id"] == ""  # Empty until decrypted
         assert parsed["encrypted_deployment"] == encrypted_blob
@@ -233,11 +231,35 @@
     def test_plain_commitment_still_works(self) -> None:
         """Plain commitments should still work."""
+        commitment = "95edf2b6-e18b-400a-8398-5573df10e5e4"
+
+        parsed = parse_commitment(commitment)
+
+        assert parsed["deployment_id"] == "95edf2b6-e18b-400a-8398-5573df10e5e4"
+        assert parsed["encrypted_deployment"] is None
+
+    def test_legacy_commitment_format(self) -> None:
+        """Legacy repo:rev:deployment_id format should still parse."""
         commitment = "user/policy:abc123:95edf2b6-e18b-400a-8398-5573df10e5e4"

         parsed = parse_commitment(commitment)

-        assert parsed["huggingface_repo"] == "user/policy"
-        assert parsed["revision_sha"] == "abc123"
         assert parsed["deployment_id"] == "95edf2b6-e18b-400a-8398-5573df10e5e4"
         assert parsed["encrypted_deployment"] is None
+
+    def test_legacy_encrypted_format(self, keypair: BackendKeypair) -> None:
+        """Legacy repo:rev:e:blob format should still parse."""
+        deployment_id = "95edf2b6-e18b-400a-8398-5573df10e5e4"
+        encrypted_blob = encrypt_deployment_id(deployment_id, keypair.public_key_hex())
+
+        commitment = f"user/policy:abc123:e:{encrypted_blob}"
+
+        parsed = parse_commitment(commitment)
+
+        assert parsed["deployment_id"] == ""
+        assert parsed["encrypted_deployment"] == encrypted_blob
+
+        # Should decrypt correctly
+        assert parsed["encrypted_deployment"] is not None
+        decrypted = decrypt_deployment_id(parsed["encrypted_deployment"], keypair.private_key)
+        assert decrypted == deployment_id
diff --git a/tests/unit/test_metadata_verification.py b/tests/unit/test_metadata_verification.py
new file mode 100644
index 0000000..fcabaea
--- /dev/null
+++ b/tests/unit/test_metadata_verification.py
@@ -0,0 +1,289 @@
+"""Tests for the Basilica metadata-based deployment verification."""
+
+from __future__ import annotations
+
+from dataclasses import dataclass
+from unittest.mock import AsyncMock, MagicMock, patch
+
+import httpx
+import pytest
+
+from kinitro.chain.commitments import MinerCommitment
+from kinitro.executor.verification import (
+    DOCKER_HUB_REGISTRY,
+    MetadataVerifier,
+    _check_image_public,
+    parse_image_ref,
+)
+from kinitro.types import BlockNumber, Hotkey, MinerUID
+
+# ---------------------------------------------------------------------------
+# Fixtures
+# ---------------------------------------------------------------------------
+
+
+def _make_commitment(
+    uid: int = 1,
+    hotkey: str = "5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty",
+    deployment_id: str = "test-deployment-abc123",
+) -> MinerCommitment:
+    return MinerCommitment(
+        uid=MinerUID(uid),
+        hotkey=Hotkey(hotkey),
+        deployment_id=deployment_id,
+        committed_block=BlockNumber(1000),
+    )
+
+
+@dataclass
+class _FakeMetadata:
+    id: str = "dep-id-1"
+    instance_name: str = "test-deployment-abc123"
+    image: str = "python"
+    image_tag: str = "3.11-slim"
+    replicas: int = 1
+    state: str = "Running"
+    uptime_seconds: float = 3600.0
+
+
+# ---------------------------------------------------------------------------
+# parse_image_ref
+# ---------------------------------------------------------------------------
+
+
+class TestParseImageRef:
+    def test_docker_hub_library_image(self):
+        ref = parse_image_ref("python", "3.11-slim")
+        assert ref.registry == DOCKER_HUB_REGISTRY
+        assert ref.repository == "library/python"
+        assert ref.tag == "3.11-slim"
+
+    def test_docker_hub_library_image_inline_tag(self):
+        ref = parse_image_ref("python:3.11-slim")
+        assert ref.registry == DOCKER_HUB_REGISTRY
+        assert ref.repository == "library/python"
+        assert ref.tag == "3.11-slim"
+
+    def test_docker_hub_org_image(self):
+        ref = parse_image_ref("pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime")
+        assert ref.registry == DOCKER_HUB_REGISTRY
+        assert ref.repository == "pytorch/pytorch"
+        assert ref.tag == "2.1.0-cuda12.1-cudnn8-runtime"
+
+    def test_fully_qualified_ghcr(self):
+        ref = parse_image_ref("ghcr.io/org/image:v1")
+        assert ref.registry == "ghcr.io"
+        assert ref.repository == "org/image"
+        assert ref.tag == "v1"
+
+    def test_fully_qualified_nested(self):
+        ref = parse_image_ref("nvcr.io/nvidia/pytorch:23.10-py3")
+        assert ref.registry == "nvcr.io"
+        assert ref.repository == "nvidia/pytorch"
+        assert ref.tag == "23.10-py3"
+
+    def test_no_tag_defaults_to_latest(self):
+        ref = parse_image_ref("python")
+        assert ref.tag == "latest"
+
+    def test_separate_tag_overrides_inline(self):
+        ref = parse_image_ref("python:3.10", "3.11-slim")
+        assert ref.tag == "3.11-slim"
+
+    def test_localhost_registry(self):
+        ref = parse_image_ref("localhost:5000/myimage:v1")
+        assert ref.registry == "localhost:5000"
+        assert ref.repository == "myimage"
+        assert ref.tag == "v1"
+
+
+# ---------------------------------------------------------------------------
+# MetadataVerifier.verify_miner
+# ---------------------------------------------------------------------------
+
+
+class TestVerifyMiner:
+    @pytest.mark.asyncio
+    async def test_happy_path(self):
+        """Healthy deployment with public image passes verification."""
+        verifier = MetadataVerifier.__new__(MetadataVerifier)
+        verifier._client = MagicMock()
+        verifier._client.get_public_deployment_metadata = MagicMock(return_value=_FakeMetadata())
+
+        with patch(
+            "kinitro.executor.verification._check_image_public",
+            new_callable=AsyncMock,
+            return_value=True,
+        ):
+            result = await verifier.verify_miner(_make_commitment())
+
+        assert result.verified is True
+        assert result.state == "Running"
+        assert result.image == "python"
+        assert result.image_tag == "3.11-slim"
+        assert
result.image_public is True + + @pytest.mark.asyncio + async def test_unhealthy_state(self): + """Deployment in non-healthy state fails verification.""" + verifier = MetadataVerifier.__new__(MetadataVerifier) + verifier._client = MagicMock() + verifier._client.get_public_deployment_metadata = MagicMock( + return_value=_FakeMetadata(state="Stopped") + ) + + result = await verifier.verify_miner(_make_commitment()) + + assert result.verified is False + assert result.state == "Stopped" + assert "not healthy" in (result.failure_reason or "") + + @pytest.mark.asyncio + async def test_private_image(self): + """Non-public image fails verification.""" + verifier = MetadataVerifier.__new__(MetadataVerifier) + verifier._client = MagicMock() + verifier._client.get_public_deployment_metadata = MagicMock( + return_value=_FakeMetadata(image="private-registry.corp/secret-img", image_tag="v1") + ) + + with patch( + "kinitro.executor.verification._check_image_public", + new_callable=AsyncMock, + return_value=False, + ): + result = await verifier.verify_miner(_make_commitment()) + + assert result.verified is False + assert result.image_public is False + assert "not publicly pullable" in (result.failure_reason or "") + + @pytest.mark.asyncio + async def test_no_image_in_metadata(self): + """Missing image field fails verification.""" + verifier = MetadataVerifier.__new__(MetadataVerifier) + verifier._client = MagicMock() + verifier._client.get_public_deployment_metadata = MagicMock( + return_value=_FakeMetadata(image="") + ) + + result = await verifier.verify_miner(_make_commitment()) + + assert result.verified is False + assert "No image" in (result.failure_reason or "") + + @pytest.mark.asyncio + async def test_metadata_api_error(self): + """Exception from SDK fails verification gracefully.""" + verifier = MetadataVerifier.__new__(MetadataVerifier) + verifier._client = MagicMock() + verifier._client.get_public_deployment_metadata = MagicMock( + side_effect=RuntimeError("API 
unavailable") + ) + + result = await verifier.verify_miner(_make_commitment()) + + assert result.verified is False + assert result.error == "API unavailable" + assert "Metadata API call failed" in (result.failure_reason or "") + + +# --------------------------------------------------------------------------- +# MetadataVerifier.verify_miners (batch) +# --------------------------------------------------------------------------- + + +class TestVerifyMiners: + @pytest.mark.asyncio + async def test_batch_mixed_results(self): + """Batch with a mix of passing and failing miners.""" + verifier = MetadataVerifier.__new__(MetadataVerifier) + verifier._client = MagicMock() + + def _fake_metadata(instance_name): + if instance_name == "good-deploy": + return _FakeMetadata(instance_name="good-deploy", state="Running") + return _FakeMetadata(instance_name="bad-deploy", state="Failed") + + verifier._client.get_public_deployment_metadata = MagicMock(side_effect=_fake_metadata) + + commitments = [ + _make_commitment(uid=1, deployment_id="good-deploy"), + _make_commitment(uid=2, deployment_id="bad-deploy"), + ] + + with patch( + "kinitro.executor.verification._check_image_public", + new_callable=AsyncMock, + return_value=True, + ): + results = await verifier.verify_miners(commitments) + + assert len(results) == 2 + assert results[0].verified is True + assert results[1].verified is False + + +# --------------------------------------------------------------------------- +# _check_image_public (integration-style with mocked HTTP) +# --------------------------------------------------------------------------- + + +class TestCheckImagePublic: + @pytest.mark.asyncio + async def test_docker_hub_public_image(self, monkeypatch): + """Docker Hub image that is publicly accessible.""" + + async def _mock_get(self, url, **kwargs): + if "auth.docker.io" in str(url): + return httpx.Response(200, json={"token": "fake-token"}) + return httpx.Response(200) + + async def _mock_head(self, url, 
**kwargs): + return httpx.Response(200) + + monkeypatch.setattr(httpx.AsyncClient, "get", _mock_get) + monkeypatch.setattr(httpx.AsyncClient, "head", _mock_head) + + result = await _check_image_public("python", "3.11-slim") + assert result is True + + @pytest.mark.asyncio + async def test_docker_hub_token_failure(self, monkeypatch): + """Docker Hub token request fails.""" + + async def _mock_get(self, url, **kwargs): + return httpx.Response(403) + + monkeypatch.setattr(httpx.AsyncClient, "get", _mock_get) + + result = await _check_image_public("python", "3.11-slim") + assert result is False + + @pytest.mark.asyncio + async def test_generic_registry_public(self, monkeypatch): + """Generic registry image accessible without auth.""" + + async def _mock_head(self, url, **kwargs): + return httpx.Response(200) + + monkeypatch.setattr(httpx.AsyncClient, "head", _mock_head) + + result = await _check_image_public("ghcr.io/org/image", "v1") + assert result is True + + @pytest.mark.asyncio + async def test_network_error_returns_false(self, monkeypatch): + """Network error during registry check returns False.""" + + async def _mock_head(self, url, **kwargs): + raise httpx.ConnectError("Connection refused") + + async def _mock_get(self, url, **kwargs): + raise httpx.ConnectError("Connection refused") + + monkeypatch.setattr(httpx.AsyncClient, "head", _mock_head) + monkeypatch.setattr(httpx.AsyncClient, "get", _mock_get) + + result = await _check_image_public("python", "3.11-slim") + assert result is False diff --git a/uv.lock b/uv.lock index 5677366..00a3137 100644 --- a/uv.lock +++ b/uv.lock @@ -281,13 +281,13 @@ wheels = [ [[package]] name = "basilica-sdk" -version = "0.15.0" +version = "0.19.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/ed/da/eee2e89c423c8e0c9d7cc2611dcc4d65077999dad1ab81ddd59f0345d452/basilica_sdk-0.15.0.tar.gz", hash = "sha256:ff172ab1cec1f0758b6c717232641a0174a46b703b9ccdb104bc515bf20a7c1b", 
size = 674857, upload-time = "2026-01-30T20:22:35.582Z" } +sdist = { url = "https://files.pythonhosted.org/packages/f7/2e/1680b8ada641c89d697c74ad1d3c1f774459e45993a343da81ebc9a074af/basilica_sdk-0.19.0.tar.gz", hash = "sha256:c8fc4966ca126e6579dd0fe3b1629b02359d97c74b8b5eb31f5b717ff411bda1", size = 739218, upload-time = "2026-02-12T19:55:03.645Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/ff/64/cd0f65929342e65c304ccb0bee85533db7cf9fa44f769b370924d3981c42/basilica_sdk-0.15.0-cp310-abi3-macosx_10_12_x86_64.whl", hash = "sha256:8993c6624f189a77c0a76d1fa93e139c06a45ad75723aebd1ce1bb2ae5a8fed9", size = 3036858, upload-time = "2026-01-30T20:22:28.898Z" }, - { url = "https://files.pythonhosted.org/packages/d8/b6/bb076fbb45452166a25f4ee802570dac710ff9be94ccc611e710cf9ad37e/basilica_sdk-0.15.0-cp310-abi3-macosx_11_0_arm64.whl", hash = "sha256:d3ccb9d1b89644cf39ad93b714a4c9ba9d09420967410d9ad29f7bd3f33699fc", size = 2953361, upload-time = "2026-01-30T20:22:31.229Z" }, - { url = "https://files.pythonhosted.org/packages/c5/13/fdddeea097f382a1506c564970e66f8d52e92996604ce3adfc96b7f07fb4/basilica_sdk-0.15.0-cp310-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d0b15f3b103022cab3fa21497b76318688313131b0b8416740d4c886f9317f11", size = 4039016, upload-time = "2026-01-30T20:22:33.479Z" }, + { url = "https://files.pythonhosted.org/packages/80/aa/bbd0d1841a538b26fc5836ccfd97261d7fc3c52123a2968840068fef44d7/basilica_sdk-0.19.0-cp310-abi3-macosx_10_12_x86_64.whl", hash = "sha256:ea5ef8db9ce4bfa216d6aa3fd28e8c6be7370fb04fd2f97250a6231d400329a3", size = 4126039, upload-time = "2026-02-12T19:54:58.149Z" }, + { url = "https://files.pythonhosted.org/packages/57/b2/82d080dcf7316b3acbf7a5d602398440dbfe021beea29a11adb6f03a9fda/basilica_sdk-0.19.0-cp310-abi3-macosx_11_0_arm64.whl", hash = "sha256:c3c03cc37228c7b61b55bdf1bc67a35c3a8050775c743627878653e7f171de7b", size = 3346491, upload-time = "2026-02-12T19:54:59.767Z" }, + { url = 
"https://files.pythonhosted.org/packages/6c/1d/843a6f9a31839b5a8ce89a35fdbd2db4e560280965301495063f7e1f3ace/basilica_sdk-0.19.0-cp310-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:801ef2ad06075532c47cb302475e48dd614aa635dfb859fccba8469c90586c25", size = 5283338, upload-time = "2026-02-12T19:55:01.372Z" }, ] [[package]] @@ -812,15 +812,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/5c/05/5cbb59154b093548acd0f4c7c474a118eda06da25aa75c616b72d8fcd92a/fastapi-0.128.0-py3-none-any.whl", hash = "sha256:aebd93f9716ee3b4f4fcfe13ffb7cf308d99c9f3ab5622d8877441072561582d", size = 103094, upload-time = "2025-12-27T15:21:12.154Z" }, ] -[[package]] -name = "filelock" -version = "3.20.3" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/1d/65/ce7f1b70157833bf3cb851b556a37d4547ceafc158aa9b34b36782f23696/filelock-3.20.3.tar.gz", hash = "sha256:18c57ee915c7ec61cff0ecf7f0f869936c7c30191bb0cf406f1341778d0834e1", size = 19485, upload-time = "2026-01-09T17:55:05.421Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/b5/36/7fb70f04bf00bc646cd5bb45aa9eddb15e19437a28b8fb2b4a5249fac770/filelock-3.20.3-py3-none-any.whl", hash = "sha256:4b0dda527ee31078689fc205ec4f1c1bf7d56cf88b6dc9426c4f230e46c2dce1", size = 16701, upload-time = "2026-01-09T17:55:04.334Z" }, -] - [[package]] name = "frozenlist" version = "1.8.0" @@ -952,6 +943,7 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/f8/0a/a3871375c7b9727edaeeea994bfff7c63ff7804c9829c19309ba2e058807/greenlet-3.3.0-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:b01548f6e0b9e9784a2c99c5651e5dc89ffcbe870bc5fb2e5ef864e9cc6b5dcb", size = 276379, upload-time = "2025-12-04T14:23:30.498Z" }, { url = "https://files.pythonhosted.org/packages/43/ab/7ebfe34dce8b87be0d11dae91acbf76f7b8246bf9d6b319c741f99fa59c6/greenlet-3.3.0-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = 
"sha256:349345b770dc88f81506c6861d22a6ccd422207829d2c854ae2af8025af303e3", size = 597294, upload-time = "2025-12-04T14:50:06.847Z" }, { url = "https://files.pythonhosted.org/packages/a4/39/f1c8da50024feecd0793dbd5e08f526809b8ab5609224a2da40aad3a7641/greenlet-3.3.0-cp312-cp312-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:e8e18ed6995e9e2c0b4ed264d2cf89260ab3ac7e13555b8032b25a74c6d18655", size = 607742, upload-time = "2025-12-04T14:57:42.349Z" }, + { url = "https://files.pythonhosted.org/packages/77/cb/43692bcd5f7a0da6ec0ec6d58ee7cddb606d055ce94a62ac9b1aa481e969/greenlet-3.3.0-cp312-cp312-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:c024b1e5696626890038e34f76140ed1daf858e37496d33f2af57f06189e70d7", size = 622297, upload-time = "2025-12-04T15:07:13.552Z" }, { url = "https://files.pythonhosted.org/packages/75/b0/6bde0b1011a60782108c01de5913c588cf51a839174538d266de15e4bf4d/greenlet-3.3.0-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:047ab3df20ede6a57c35c14bf5200fcf04039d50f908270d3f9a7a82064f543b", size = 609885, upload-time = "2025-12-04T14:26:02.368Z" }, { url = "https://files.pythonhosted.org/packages/49/0e/49b46ac39f931f59f987b7cd9f34bfec8ef81d2a1e6e00682f55be5de9f4/greenlet-3.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2d9ad37fc657b1102ec880e637cccf20191581f75c64087a549e66c57e1ceb53", size = 1567424, upload-time = "2025-12-04T15:04:23.757Z" }, { url = "https://files.pythonhosted.org/packages/05/f5/49a9ac2dff7f10091935def9165c90236d8f175afb27cbed38fb1d61ab6b/greenlet-3.3.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:83cd0e36932e0e7f36a64b732a6f60c2fc2df28c351bae79fbaf4f8092fe7614", size = 1636017, upload-time = "2025-12-04T14:27:29.688Z" }, @@ -959,6 +951,7 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/02/2f/28592176381b9ab2cafa12829ba7b472d177f3acc35d8fbcf3673d966fff/greenlet-3.3.0-cp313-cp313-macosx_11_0_universal2.whl", hash = 
"sha256:a1e41a81c7e2825822f4e068c48cb2196002362619e2d70b148f20a831c00739", size = 275140, upload-time = "2025-12-04T14:23:01.282Z" }, { url = "https://files.pythonhosted.org/packages/2c/80/fbe937bf81e9fca98c981fe499e59a3f45df2a04da0baa5c2be0dca0d329/greenlet-3.3.0-cp313-cp313-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9f515a47d02da4d30caaa85b69474cec77b7929b2e936ff7fb853d42f4bf8808", size = 599219, upload-time = "2025-12-04T14:50:08.309Z" }, { url = "https://files.pythonhosted.org/packages/c2/ff/7c985128f0514271b8268476af89aee6866df5eec04ac17dcfbc676213df/greenlet-3.3.0-cp313-cp313-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:7d2d9fd66bfadf230b385fdc90426fcd6eb64db54b40c495b72ac0feb5766c54", size = 610211, upload-time = "2025-12-04T14:57:43.968Z" }, + { url = "https://files.pythonhosted.org/packages/79/07/c47a82d881319ec18a4510bb30463ed6891f2ad2c1901ed5ec23d3de351f/greenlet-3.3.0-cp313-cp313-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:30a6e28487a790417d036088b3bcb3f3ac7d8babaa7d0139edbaddebf3af9492", size = 624311, upload-time = "2025-12-04T15:07:14.697Z" }, { url = "https://files.pythonhosted.org/packages/fd/8e/424b8c6e78bd9837d14ff7df01a9829fc883ba2ab4ea787d4f848435f23f/greenlet-3.3.0-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:087ea5e004437321508a8d6f20efc4cfec5e3c30118e1417ea96ed1d93950527", size = 612833, upload-time = "2025-12-04T14:26:03.669Z" }, { url = "https://files.pythonhosted.org/packages/b5/ba/56699ff9b7c76ca12f1cdc27a886d0f81f2189c3455ff9f65246780f713d/greenlet-3.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ab97cf74045343f6c60a39913fa59710e4bd26a536ce7ab2397adf8b27e67c39", size = 1567256, upload-time = "2025-12-04T15:04:25.276Z" }, { url = "https://files.pythonhosted.org/packages/1e/37/f31136132967982d698c71a281a8901daf1a8fbab935dce7c0cf15f942cc/greenlet-3.3.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = 
"sha256:5375d2e23184629112ca1ea89a53389dddbffcf417dad40125713d88eb5f96e8", size = 1636483, upload-time = "2025-12-04T14:27:30.804Z" }, @@ -966,6 +959,7 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/d7/7c/f0a6d0ede2c7bf092d00bc83ad5bafb7e6ec9b4aab2fbdfa6f134dc73327/greenlet-3.3.0-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:60c2ef0f578afb3c8d92ea07ad327f9a062547137afe91f38408f08aacab667f", size = 275671, upload-time = "2025-12-04T14:23:05.267Z" }, { url = "https://files.pythonhosted.org/packages/44/06/dac639ae1a50f5969d82d2e3dd9767d30d6dbdbab0e1a54010c8fe90263c/greenlet-3.3.0-cp314-cp314-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0a5d554d0712ba1de0a6c94c640f7aeba3f85b3a6e1f2899c11c2c0428da9365", size = 646360, upload-time = "2025-12-04T14:50:10.026Z" }, { url = "https://files.pythonhosted.org/packages/e0/94/0fb76fe6c5369fba9bf98529ada6f4c3a1adf19e406a47332245ef0eb357/greenlet-3.3.0-cp314-cp314-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:3a898b1e9c5f7307ebbde4102908e6cbfcb9ea16284a3abe15cab996bee8b9b3", size = 658160, upload-time = "2025-12-04T14:57:45.41Z" }, + { url = "https://files.pythonhosted.org/packages/93/79/d2c70cae6e823fac36c3bbc9077962105052b7ef81db2f01ec3b9bf17e2b/greenlet-3.3.0-cp314-cp314-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:dcd2bdbd444ff340e8d6bdf54d2f206ccddbb3ccfdcd3c25bf4afaa7b8f0cf45", size = 671388, upload-time = "2025-12-04T15:07:15.789Z" }, { url = "https://files.pythonhosted.org/packages/b8/14/bab308fc2c1b5228c3224ec2bf928ce2e4d21d8046c161e44a2012b5203e/greenlet-3.3.0-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5773edda4dc00e173820722711d043799d3adb4f01731f40619e07ea2750b955", size = 660166, upload-time = "2025-12-04T14:26:05.099Z" }, { url = "https://files.pythonhosted.org/packages/4b/d2/91465d39164eaa0085177f61983d80ffe746c5a1860f009811d498e7259c/greenlet-3.3.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = 
"sha256:ac0549373982b36d5fd5d30beb8a7a33ee541ff98d2b502714a09f1169f31b55", size = 1615193, upload-time = "2025-12-04T15:04:27.041Z" }, { url = "https://files.pythonhosted.org/packages/42/1b/83d110a37044b92423084d52d5d5a3b3a73cafb51b547e6d7366ff62eff1/greenlet-3.3.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d198d2d977460358c3b3a4dc844f875d1adb33817f0613f663a656f463764ccc", size = 1683653, upload-time = "2025-12-04T14:27:32.366Z" }, @@ -973,6 +967,7 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/a0/66/bd6317bc5932accf351fc19f177ffba53712a202f9df10587da8df257c7e/greenlet-3.3.0-cp314-cp314t-macosx_11_0_universal2.whl", hash = "sha256:d6ed6f85fae6cdfdb9ce04c9bf7a08d666cfcfb914e7d006f44f840b46741931", size = 282638, upload-time = "2025-12-04T14:25:20.941Z" }, { url = "https://files.pythonhosted.org/packages/30/cf/cc81cb030b40e738d6e69502ccbd0dd1bced0588e958f9e757945de24404/greenlet-3.3.0-cp314-cp314t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d9125050fcf24554e69c4cacb086b87b3b55dc395a8b3ebe6487b045b2614388", size = 651145, upload-time = "2025-12-04T14:50:11.039Z" }, { url = "https://files.pythonhosted.org/packages/9c/ea/1020037b5ecfe95ca7df8d8549959baceb8186031da83d5ecceff8b08cd2/greenlet-3.3.0-cp314-cp314t-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:87e63ccfa13c0a0f6234ed0add552af24cc67dd886731f2261e46e241608bee3", size = 654236, upload-time = "2025-12-04T14:57:47.007Z" }, + { url = "https://files.pythonhosted.org/packages/69/cc/1e4bae2e45ca2fa55299f4e85854606a78ecc37fead20d69322f96000504/greenlet-3.3.0-cp314-cp314t-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2662433acbca297c9153a4023fe2161c8dcfdcc91f10433171cf7e7d94ba2221", size = 662506, upload-time = "2025-12-04T15:07:16.906Z" }, { url = "https://files.pythonhosted.org/packages/57/b9/f8025d71a6085c441a7eaff0fd928bbb275a6633773667023d19179fe815/greenlet-3.3.0-cp314-cp314t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash 
= "sha256:3c6e9b9c1527a78520357de498b0e709fb9e2f49c3a513afd5a249007261911b", size = 653783, upload-time = "2025-12-04T14:26:06.225Z" }, { url = "https://files.pythonhosted.org/packages/f6/c7/876a8c7a7485d5d6b5c6821201d542ef28be645aa024cfe1145b35c120c1/greenlet-3.3.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:286d093f95ec98fdd92fcb955003b8a3d054b4e2cab3e2707a5039e7b50520fd", size = 1614857, upload-time = "2025-12-04T15:04:28.484Z" }, { url = "https://files.pythonhosted.org/packages/4f/dc/041be1dff9f23dac5f48a43323cd0789cb798342011c19a248d9c9335536/greenlet-3.3.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c10513330af5b8ae16f023e8ddbfb486ab355d04467c4679c5cfe4659975dd9", size = 1676034, upload-time = "2025-12-04T14:27:33.531Z" }, @@ -1002,35 +997,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload-time = "2025-04-24T03:35:24.344Z" }, ] -[[package]] -name = "hf-xet" -version = "1.2.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/5e/6e/0f11bacf08a67f7fb5ee09740f2ca54163863b07b70d579356e9222ce5d8/hf_xet-1.2.0.tar.gz", hash = "sha256:a8c27070ca547293b6890c4bf389f713f80e8c478631432962bb7f4bc0bd7d7f", size = 506020, upload-time = "2025-10-24T19:04:32.129Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/9e/a5/85ef910a0aa034a2abcfadc360ab5ac6f6bc4e9112349bd40ca97551cff0/hf_xet-1.2.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:ceeefcd1b7aed4956ae8499e2199607765fbd1c60510752003b6cc0b8413b649", size = 2861870, upload-time = "2025-10-24T19:04:11.422Z" }, - { url = "https://files.pythonhosted.org/packages/ea/40/e2e0a7eb9a51fe8828ba2d47fe22a7e74914ea8a0db68a18c3aa7449c767/hf_xet-1.2.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = 
"sha256:b70218dd548e9840224df5638fdc94bd033552963cfa97f9170829381179c813", size = 2717584, upload-time = "2025-10-24T19:04:09.586Z" }, - { url = "https://files.pythonhosted.org/packages/a5/7d/daf7f8bc4594fdd59a8a596f9e3886133fdc68e675292218a5e4c1b7e834/hf_xet-1.2.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d40b18769bb9a8bc82a9ede575ce1a44c75eb80e7375a01d76259089529b5dc", size = 3315004, upload-time = "2025-10-24T19:04:00.314Z" }, - { url = "https://files.pythonhosted.org/packages/b1/ba/45ea2f605fbf6d81c8b21e4d970b168b18a53515923010c312c06cd83164/hf_xet-1.2.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:cd3a6027d59cfb60177c12d6424e31f4b5ff13d8e3a1247b3a584bf8977e6df5", size = 3222636, upload-time = "2025-10-24T19:03:58.111Z" }, - { url = "https://files.pythonhosted.org/packages/4a/1d/04513e3cab8f29ab8c109d309ddd21a2705afab9d52f2ba1151e0c14f086/hf_xet-1.2.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:6de1fc44f58f6dd937956c8d304d8c2dea264c80680bcfa61ca4a15e7b76780f", size = 3408448, upload-time = "2025-10-24T19:04:20.951Z" }, - { url = "https://files.pythonhosted.org/packages/f0/7c/60a2756d7feec7387db3a1176c632357632fbe7849fce576c5559d4520c7/hf_xet-1.2.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:f182f264ed2acd566c514e45da9f2119110e48a87a327ca271027904c70c5832", size = 3503401, upload-time = "2025-10-24T19:04:22.549Z" }, - { url = "https://files.pythonhosted.org/packages/4e/64/48fffbd67fb418ab07451e4ce641a70de1c40c10a13e25325e24858ebe5a/hf_xet-1.2.0-cp313-cp313t-win_amd64.whl", hash = "sha256:293a7a3787e5c95d7be1857358a9130694a9c6021de3f27fa233f37267174382", size = 2900866, upload-time = "2025-10-24T19:04:33.461Z" }, - { url = "https://files.pythonhosted.org/packages/e2/51/f7e2caae42f80af886db414d4e9885fac959330509089f97cccb339c6b87/hf_xet-1.2.0-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:10bfab528b968c70e062607f663e21e34e2bba349e8038db546646875495179e", size = 2861861, upload-time = 
"2025-10-24T19:04:19.01Z" }, - { url = "https://files.pythonhosted.org/packages/6e/1d/a641a88b69994f9371bd347f1dd35e5d1e2e2460a2e350c8d5165fc62005/hf_xet-1.2.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:2a212e842647b02eb6a911187dc878e79c4aa0aa397e88dd3b26761676e8c1f8", size = 2717699, upload-time = "2025-10-24T19:04:17.306Z" }, - { url = "https://files.pythonhosted.org/packages/df/e0/e5e9bba7d15f0318955f7ec3f4af13f92e773fbb368c0b8008a5acbcb12f/hf_xet-1.2.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:30e06daccb3a7d4c065f34fc26c14c74f4653069bb2b194e7f18f17cbe9939c0", size = 3314885, upload-time = "2025-10-24T19:04:07.642Z" }, - { url = "https://files.pythonhosted.org/packages/21/90/b7fe5ff6f2b7b8cbdf1bd56145f863c90a5807d9758a549bf3d916aa4dec/hf_xet-1.2.0-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:29c8fc913a529ec0a91867ce3d119ac1aac966e098cf49501800c870328cc090", size = 3221550, upload-time = "2025-10-24T19:04:05.55Z" }, - { url = "https://files.pythonhosted.org/packages/6f/cb/73f276f0a7ce46cc6a6ec7d6c7d61cbfe5f2e107123d9bbd0193c355f106/hf_xet-1.2.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e159cbfcfbb29f920db2c09ed8b660eb894640d284f102ada929b6e3dc410a", size = 3408010, upload-time = "2025-10-24T19:04:28.598Z" }, - { url = "https://files.pythonhosted.org/packages/b8/1e/d642a12caa78171f4be64f7cd9c40e3ca5279d055d0873188a58c0f5fbb9/hf_xet-1.2.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:9c91d5ae931510107f148874e9e2de8a16052b6f1b3ca3c1b12f15ccb491390f", size = 3503264, upload-time = "2025-10-24T19:04:30.397Z" }, - { url = "https://files.pythonhosted.org/packages/17/b5/33764714923fa1ff922770f7ed18c2daae034d21ae6e10dbf4347c854154/hf_xet-1.2.0-cp314-cp314t-win_amd64.whl", hash = "sha256:210d577732b519ac6ede149d2f2f34049d44e8622bf14eb3d63bbcd2d4b332dc", size = 2901071, upload-time = "2025-10-24T19:04:37.463Z" }, - { url = 
"https://files.pythonhosted.org/packages/96/2d/22338486473df5923a9ab7107d375dbef9173c338ebef5098ef593d2b560/hf_xet-1.2.0-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:46740d4ac024a7ca9b22bebf77460ff43332868b661186a8e46c227fdae01848", size = 2866099, upload-time = "2025-10-24T19:04:15.366Z" }, - { url = "https://files.pythonhosted.org/packages/7f/8c/c5becfa53234299bc2210ba314eaaae36c2875e0045809b82e40a9544f0c/hf_xet-1.2.0-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:27df617a076420d8845bea087f59303da8be17ed7ec0cd7ee3b9b9f579dff0e4", size = 2722178, upload-time = "2025-10-24T19:04:13.695Z" }, - { url = "https://files.pythonhosted.org/packages/9a/92/cf3ab0b652b082e66876d08da57fcc6fa2f0e6c70dfbbafbd470bb73eb47/hf_xet-1.2.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3651fd5bfe0281951b988c0facbe726aa5e347b103a675f49a3fa8144c7968fd", size = 3320214, upload-time = "2025-10-24T19:04:03.596Z" }, - { url = "https://files.pythonhosted.org/packages/46/92/3f7ec4a1b6a65bf45b059b6d4a5d38988f63e193056de2f420137e3c3244/hf_xet-1.2.0-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:d06fa97c8562fb3ee7a378dd9b51e343bc5bc8190254202c9771029152f5e08c", size = 3229054, upload-time = "2025-10-24T19:04:01.949Z" }, - { url = "https://files.pythonhosted.org/packages/0b/dd/7ac658d54b9fb7999a0ccb07ad863b413cbaf5cf172f48ebcd9497ec7263/hf_xet-1.2.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:4c1428c9ae73ec0939410ec73023c4f842927f39db09b063b9482dac5a3bb737", size = 3413812, upload-time = "2025-10-24T19:04:24.585Z" }, - { url = "https://files.pythonhosted.org/packages/92/68/89ac4e5b12a9ff6286a12174c8538a5930e2ed662091dd2572bbe0a18c8a/hf_xet-1.2.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a55558084c16b09b5ed32ab9ed38421e2d87cf3f1f89815764d1177081b99865", size = 3508920, upload-time = "2025-10-24T19:04:26.927Z" }, - { url = 
"https://files.pythonhosted.org/packages/cb/44/870d44b30e1dcfb6a65932e3e1506c103a8a5aea9103c337e7a53180322c/hf_xet-1.2.0-cp37-abi3-win_amd64.whl", hash = "sha256:e6584a52253f72c9f52f9e549d5895ca7a471608495c4ecaa6cc73dba2b24d69", size = 2905735, upload-time = "2025-10-24T19:04:35.928Z" }, -] - [[package]] name = "httpcore" version = "1.0.9" @@ -1088,27 +1054,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" }, ] -[[package]] -name = "huggingface-hub" -version = "1.3.3" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "filelock" }, - { name = "fsspec" }, - { name = "hf-xet", marker = "platform_machine == 'AMD64' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'" }, - { name = "httpx" }, - { name = "packaging" }, - { name = "pyyaml" }, - { name = "shellingham" }, - { name = "tqdm" }, - { name = "typer-slim" }, - { name = "typing-extensions" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/02/c3/544cd4cdd4b3c6de8591b56bb69efc3682e9ac81e36135c02e909dd98c5b/huggingface_hub-1.3.3.tar.gz", hash = "sha256:f8be6f468da4470db48351e8c77d6d8115dff9b3daeb30276e568767b1ff7574", size = 627649, upload-time = "2026-01-22T13:59:46.931Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/48/e8/0d032698916b9773b710c46e3b8e0154fc34cd017b151cc316c84c6c34fe/huggingface_hub-1.3.3-py3-none-any.whl", hash = "sha256:44af7b62380efc87c1c3bde7e1bf0661899b5bdfca1fc60975c61ee68410e10e", size = 536604, upload-time = "2026-01-22T13:59:45.391Z" }, -] - [[package]] name = "idna" version = "3.11" @@ -1174,7 +1119,6 @@ dependencies = [ { name = "fastapi" }, { name = "gymnasium" }, { name = "httpx" }, - { name = 
"huggingface-hub" }, { name = "metaworld" }, { name = "mujoco" }, { name = "numpy" }, @@ -1203,14 +1147,13 @@ requires-dist = [ { name = "aiohttp", specifier = ">=3.9.0" }, { name = "alembic", specifier = ">=1.14.0" }, { name = "asyncpg", specifier = ">=0.30.0" }, - { name = "basilica-sdk", specifier = "==0.15.0" }, + { name = "basilica-sdk", specifier = "==0.19.0" }, { name = "bittensor", specifier = ">=10.1.0" }, { name = "bittensor-wallet", specifier = ">=4.0.0" }, { name = "docker", specifier = ">=7.0.0" }, { name = "fastapi", specifier = ">=0.115.0" }, { name = "gymnasium", specifier = ">=1.1" }, { name = "httpx", specifier = ">=0.25.0" }, - { name = "huggingface-hub", specifier = ">=0.20.0" }, { name = "metaworld", git = "https://github.com/Farama-Foundation/Metaworld.git?rev=master" }, { name = "mujoco", specifier = ">=3.0.0" }, { name = "numpy", specifier = ">=2.0.1,<2.3" }, @@ -2342,18 +2285,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/44/6f/7120676b6d73228c96e17f1f794d8ab046fc910d781c8d151120c3f1569e/toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b", size = 16588, upload-time = "2020-11-01T01:40:20.672Z" }, ] -[[package]] -name = "tqdm" -version = "4.67.1" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "colorama", marker = "sys_platform == 'win32'" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/a8/4b/29b4ef32e036bb34e4ab51796dd745cdba7ed47ad142a9f4a1eb8e0c744d/tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2", size = 169737, upload-time = "2024-11-24T20:12:22.481Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2", size = 78540, upload-time = "2024-11-24T20:12:19.698Z" }, -] - 
[[package]] name = "ty" version = "0.0.14" @@ -2393,19 +2324,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/7a/ed/d6fca788b51d0d4640c4bc82d0e85bad4b49809bca36bf4af01b4dcb66a7/typer-0.23.0-py3-none-any.whl", hash = "sha256:79f4bc262b6c37872091072a3cb7cb6d7d79ee98c0c658b4364bdcde3c42c913", size = 56668, upload-time = "2026-02-11T15:22:21.075Z" }, ] -[[package]] -name = "typer-slim" -version = "0.21.1" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "click" }, - { name = "typing-extensions" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/17/d4/064570dec6358aa9049d4708e4a10407d74c99258f8b2136bb8702303f1a/typer_slim-0.21.1.tar.gz", hash = "sha256:73495dd08c2d0940d611c5a8c04e91c2a0a98600cbd4ee19192255a233b6dbfd", size = 110478, upload-time = "2026-01-06T11:21:11.176Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/c8/0a/4aca634faf693e33004796b6cee0ae2e1dba375a800c16ab8d3eff4bb800/typer_slim-0.21.1-py3-none-any.whl", hash = "sha256:6e6c31047f171ac93cc5a973c9e617dbc5ab2bddc4d0a3135dc161b4e2020e0d", size = 47444, upload-time = "2026-01-06T11:21:12.441Z" }, -] - [[package]] name = "typing-extensions" version = "4.15.0"