Omron aims to create the world's largest peer-to-peer Verified Intelligence network by building a Proof-of-Inference system for the Bittensor network. This initiative aligns with the Opentensor Foundation's criteria for innovative subnet solutions. Zero-knowledge machine learning (zk-ML) allows an AI model to be converted into a unique 'fingerprint': a circuit that can be used to verify that a prediction was generated by that specific model, thereby providing what we term Proof-of-Inference.
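Conceptually, the flow looks like the sketch below. This is an illustrative outline only; `circuitize`, `prove`, and `verify` are hypothetical placeholder functions, not the subnet's actual API or any particular zk-ML library.

```python
# Illustrative sketch of the proof-of-inference flow; all function names are
# hypothetical placeholders, not the subnet's actual API.

def circuitize(model):
    """Compile a trained model into a zk circuit plus a verification key (its 'fingerprint')."""
    ...

def prove(circuit, model_input):
    """Run inference inside the circuit and return (output, proof). Miner-side; expensive."""
    ...

def verify(verification_key, model_input, output, proof) -> bool:
    """Cheaply check that `output` was produced by the committed model. Validator-side."""
    ...

# End-to-end pattern:
#   circuit, vk = circuitize(model)        # done once by the miner
#   output, proof = prove(circuit, x)      # per request, on the miner
#   assert verify(vk, x, output, proof)    # per response, on the validator
```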
Omron incentivizes miners and validators on Subnet 2 to contribute to the generation and validation of high-quality, secure, and efficient verified AI predictions, using a specialized reward mechanism tailored to the unique aspects of zk-ML and decentralized AI. Zero-knowledge proof generation is currently more CPU-intensive than GPU-intensive, which opens the door for non-GPU miners to participate; the longer-term goal, however, is to incentivize the development of proving systems optimized for GPU-based operations. The incentives center on miners creating succinct, efficient models that can be circuitized with a zero-knowledge proving system.
The reward mechanism for Subnet 2 scores AI predictions based on the cryptographic integrity of the accompanying zk-proofs and the time taken to generate them, rather than solely on end results. This approach reduces the computational burden on validators, since zk-proofs efficiently confirm the source model and the integrity of AI predictions.
Miners on the subnet:

- Receive input data from validators on the subnet.
- Generate predictions using custom, verifiable AI models that have been converted into zero-knowledge circuits.
- Return the generated content to the requesting validator for validation and distribution.

Validators on the subnet:

- Produce input data and distribute requests for verified inference to miners participating on the subnet.
- Confirm that miners are acting faithfully by verifying the authenticity of each miner's returned zero-knowledge proof.
- Score results from miners based on performance metrics such as proof size and response time (see the sketch below).
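These criteria can be combined into a single per-response score. The snippet below is a minimal sketch under assumed weights and bounds; `score_response` and its parameters are hypothetical and do not represent Subnet 2's actual reward formula.

```python
# Hedged sketch of how a reward mechanism could combine proof validity,
# response time, and proof size. Weights and bounds are illustrative assumptions.

def score_response(
    proof_valid: bool,
    response_time_s: float,
    proof_size_bytes: int,
    max_response_time_s: float = 60.0,
    max_proof_size_bytes: int = 5_000_000,
) -> float:
    """Return a score in [0, 1] for a single miner response."""
    if not proof_valid:
        return 0.0  # an invalid or missing proof earns nothing

    # Faster responses and smaller proofs score higher (simple linear normalization).
    time_score = max(0.0, 1.0 - response_time_s / max_response_time_s)
    size_score = max(0.0, 1.0 - proof_size_bytes / max_proof_size_bytes)

    # Example weighting that favors response time over proof size.
    return 0.8 * time_score + 0.2 * size_score
```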
Run the command below to install Omron and its dependencies.
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/inference-labs-inc/omron-subnet/main/setup.sh)"
```
Then register your wallet on the subnet:

```bash
btcli subnet register --subtensor.network finney --netuid 2 --wallet.name {your_coldkey} --wallet.hotkey {your_hotkey}
```
### Docker Instructions (not supported during the competition running from Feb 6 - Apr 20)
#### With docker compose (recommended)

```yaml
services:
  omron-miner:
    image: ghcr.io/inference-labs-inc/omron:latest
    restart: unless-stopped
    ports:
      - 8091:8091
    volumes:
      # Update this path to your .bittensor directory
      # Note: use /root/.bittensor instead of /home/ubuntu/.bittensor if you set PUID to 0
      - {path_to_your_.bittensor_directory}:/home/ubuntu/.bittensor
    environment:
      # This UID needs read/write access to your .bittensor directory; either update the UID or the directory permissions
      - PUID=1000
    labels:
      - com.centurylinklabs.watchtower.enable=true # Enables Watchtower for this container
    command: miner.py --wallet.name {your_miner_key_name} --wallet.hotkey {your_miner_hotkey_name} --netuid 2

  # Use Watchtower to automatically update containers
  watchtower:
    image: containrrr/watchtower:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: --interval 60 --cleanup --label-enable
```
#### With docker run

```bash
docker run -d \
  --name omron-miner \
  -p 8091:8091 \
  -v {path_to_your_.bittensor_directory}:/home/ubuntu/.bittensor \
  -e PUID=1000 \
  --restart unless-stopped \
  ghcr.io/inference-labs-inc/omron:latest \
  miner.py \
  --wallet.name {your_miner_key_name} \
  --wallet.hotkey {your_miner_hotkey_name} \
  --netuid 2
```
> [!IMPORTANT]
> Ensure you are within the `./neurons` directory before using the commands below to start your miner.
```bash
cd neurons
pm2 start miner.py --name miner --interpreter ../.venv/bin/python --kill-timeout 3000 -- \
  --netuid 2 \
  --wallet.name {your_miner_key_name} \
  --wallet.hotkey {your_miner_hotkey_name}
```
Alternatively, run `make pm2-miner WALLET_NAME={your_miner_key_name} HOTKEY_NAME={your_miner_hotkey_name}`.
### Docker Instructions (not supported during the competition running from Feb 6 - Apr 20)
#### With docker compose (recommended)

```yaml
services:
  omron-validator:
    image: ghcr.io/inference-labs-inc/omron:latest
    restart: unless-stopped
    ports:
      - 8443:8443
      - 9090:9090 # In case you use Prometheus monitoring
    volumes:
      # Update this path to your .bittensor directory
      # Note: use /root/.bittensor instead of /home/ubuntu/.bittensor if you set PUID to 0
      - {path_to_your_.bittensor_directory}:/home/ubuntu/.bittensor
    environment:
      # This UID needs read/write access to your .bittensor directory; either update the UID or the directory permissions
      - PUID=1000
    labels:
      - com.centurylinklabs.watchtower.enable=true # Enables Watchtower for this container
    command: validator.py --wallet.name {validator_key_name} --wallet.hotkey {validator_hot_key_name} --netuid 2

  # Use Watchtower to automatically update containers
  watchtower:
    image: containrrr/watchtower:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: --interval 60 --cleanup --label-enable
```
#### With docker run

```bash
docker run -d \
  --name omron-validator \
  -p 8443:8443 \
  -p 9090:9090 \
  -v {path_to_your_.bittensor_directory}:/home/ubuntu/.bittensor \
  -e PUID=1000 \
  --restart unless-stopped \
  ghcr.io/inference-labs-inc/omron:latest \
  validator.py \
  --wallet.name {validator_key_name} \
  --wallet.hotkey {validator_hot_key_name} \
  --netuid 2
```
> [!IMPORTANT]
> Ensure you are within the `./neurons` directory before using the commands below to start your validator.
```bash
cd neurons
pm2 start validator.py --name validator --interpreter ../.venv/bin/python --kill-timeout 3000 -- \
  --netuid 2 \
  --wallet.name {your_validator_key_name} \
  --wallet.hotkey {your_validator_hotkey_name}
```
Alternatively, run `make pm2-validator WALLET_NAME={validator_key_name} HOTKEY_NAME={validator_hot_key_name}`.
Miners contribute to this subnet by providing the compute used to generate, and prove, AI model inferences. Miners receive workloads from validators in the form of input data, perform verified inference on those inputs, and respond with the output along with a zero-knowledge proof of inference.
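At a high level, the miner's request handling follows the pattern below. This is a minimal sketch only; `load_circuit`, `prove`, and `handle_request` are hypothetical placeholders, not the subnet's actual neuron code.

```python
# Minimal sketch of the miner-side flow; all names are hypothetical placeholders.

def load_circuit():
    """Load the miner's pre-compiled zero-knowledge circuit for its model (placeholder)."""
    return object()

def prove(circuit, inputs):
    """Run inference inside the circuit and return (output, proof) (placeholder)."""
    return {"prediction": None}, b"proof-bytes"

def handle_request(inputs: dict) -> dict:
    """Receive input data from a validator and respond with output plus proof of inference."""
    circuit = load_circuit()
    output, proof = prove(circuit, inputs)
    return {"output": output, "proof": proof}
```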
> [!IMPORTANT]
> As of February 2025, the miner should be run on a bare-metal macOS machine with support for Metal GPU acceleration to optimize for performance.
| Component | Requirement |
| --- | --- |
| CPU | 8-core, 3.2 GHz |
| RAM | 32 GB |
| Network Up | 400 Mbps |
| Network Down | 400 Mbps |
| Storage | 100 GB |
> [!NOTE]
> Exceeding these requirements in terms of storage, network, and CPU speed will most likely result in higher rewards due to performance incentivization.
| Component | Recommendation |
| --- | --- |
| CPU | 8-core, 3.6 GHz |
| RAM | 64 GB |
| Network Up | 1 Gbps |
| Network Down | 1 Gbps |
| Storage | 400 GB |
| Storage Medium | SSD |
Validators are responsible for verifying the model outputs provided by miners and for updating each miner's score based on the verification results.
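A single validation round looks roughly like the sketch below. Every helper (`make_inputs`, `query_miner`, `verify`, `score_response`, `set_weights`) is passed in as a parameter and is a hypothetical placeholder, and each miner object is assumed to expose `uid` and `verification_key`; this is not the subnet's actual validator code.

```python
import time

# Hedged sketch of a validator round; all helpers are hypothetical placeholders
# supplied by the caller, not the subnet's actual code.

def validate_round(miners, make_inputs, query_miner, verify, score_response, set_weights):
    """Query each miner, verify its proof of inference, and update scores accordingly."""
    scores = {}
    for miner in miners:
        inputs = make_inputs()                      # fresh input data for this request
        start = time.monotonic()
        response = query_miner(miner, inputs)       # expected: {"output": ..., "proof": ...}
        elapsed = time.monotonic() - start

        valid = verify(miner.verification_key, inputs, response["output"], response["proof"])
        scores[miner.uid] = score_response(valid, elapsed, len(response["proof"]))

    set_weights(scores)                             # push the updated scores on-chain
    return scores
```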
> [!IMPORTANT]
> As of February 2025, the validator must be run on a bare-metal macOS machine with support for Metal GPU acceleration.
> Though AWS metal instances are recommended, any macOS machine with a Metal GPU is sufficient.
| Component | Requirement |
| --- | --- |
| Instance | mac2-m2pro.metal (AWS) |
| CPU | Apple M2 Pro (12-core) |
| RAM | 32 GB |
| Network Up | 10 Gbps |
| Network Down | 10 Gbps |
| Storage | 2 TB SSD |
| Component | Recommendation |
| --- | --- |
| Instance | mac2-m1ultra.metal (AWS) |
| CPU | Apple M1 Ultra (20-core) |
| RAM | 128 GB |
| Network Up | 10 Gbps |
| Network Down | 10 Gbps |
| Storage | 2 TB+ SSD |