Shard is an open-source library that provides observability and video understanding capabilities for Large Vision Models (LVMs) such as OpenAI's Sora. It helps developers monitor, analyze, and understand the behavior and performance of vision models in production.
- LVM Performance Monitoring
- Video Understanding API
- Model Behavior Analysis
- Integration with Popular Vision Models
- Observability Metrics and Dashboards
from src.sdk import Shard
# Initialize the client
client = Shard("your_api_key_here")
# Interpret a video
response = client.interpret("/absolute/path/to/video.mp4")
# Analyze prompt adherence
analysis = client.gauge_prompt_adherance("/absolute/path/to/video.mp4", "your prompt here")
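The snippet above shows the two SDK calls in isolation; the sketch below wraps them in a small command-line script. It assumes only what the quick start shows (a Shard client constructed from an API key, and two methods that return printable objects), so treat it as an illustration rather than documented usage.
# example_quickstart.py — hypothetical wrapper around the quick-start calls above.
# Assumes the SDK return values are printable objects; this is not documented API behavior.
import argparse

from src.sdk import Shard

def main() -> None:
    parser = argparse.ArgumentParser(description="Run Shard on a local video file.")
    parser.add_argument("video", help="Absolute path to the video file")
    parser.add_argument("--prompt", help="Prompt to gauge adherence against")
    parser.add_argument("--api-key", required=True, help="Shard API key")
    args = parser.parse_args()

    client = Shard(args.api_key)

    # Interpret the video and print whatever the SDK returns.
    print(client.interpret(args.video))

    # Optionally score how well the video follows the given prompt.
    if args.prompt:
        print(client.gauge_prompt_adherance(args.video, args.prompt))

if __name__ == "__main__":
    main()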
- Clone the repository:
git clone https://github.com/Shard-AI/Shard.git
cd Shard
- Set up your environment:
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
- Configure environment variables (see the pre-flight check sketch after these steps):
cp .env.example .env
# Edit .env with your configuration
source export_env.sh
- Run the API in two terminals (see the smoke-test sketch after these steps):
uvicorn src.app:app --host 0.0.0.0 --port 8000 --reload
celery -A src.core.async_worker worker --loglevel=info --pool=threads
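The variable names the service needs live in .env.example and are not reproduced here. The following is a hypothetical pre-flight check you could run after sourcing your environment; the names in REQUIRED_VARS are placeholders, so replace them with whatever keys that file actually defines.
# check_env.py — hypothetical pre-flight check run after sourcing your environment.
# The names in REQUIRED_VARS are placeholders, not the real keys from .env.example.
import os

REQUIRED_VARS = ["SHARD_API_KEY", "CELERY_BROKER_URL"]  # placeholders only

missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("Environment looks configured.")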
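Once both processes are up, a quick way to confirm the web server is reachable is to request a known endpoint. The sketch below assumes src.app:app is a FastAPI application with default settings, which serve interactive documentation at /docs; if the project disables those docs, substitute an endpoint you know exists.
# smoke_test.py — assumes the server started above is listening on port 8000
# and that src.app:app is a FastAPI app serving its default /docs page.
from urllib.request import urlopen

with urlopen("http://localhost:8000/docs") as resp:
    print("API server responded with HTTP", resp.status)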
Visit our documentation for detailed guides and API reference.
We welcome contributions! Please see our Contributing Guidelines for details on how to submit pull requests, report issues, and contribute to the project.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Contact us at [email protected]