An HTTP API to classify Replicate models into Hugging Face tasks using a language model.
Powered by:
- Cloudflare Workers for hosting the HTTP API
- Cloudflare KV for caching
- Hono for authoring the HTTP API
- Anthropic Claude 3.7 Sonnet for model classification
- Replicate API for model metadata
- Hugging Face Tasks for model task metadata
Repository: https://github.com/zeke/replicate-model-classifier
Base URL: https://replicate-model-classifier.ziki.workers.dev/
GET /api/models/:owner/:model
Returns a JSON object with the model classification:
{
  "model": "salesforce/blip",
  "classification": {
    "summary": "Generate image captions and answer questions about images",
    "inputTypes": ["image", "text"],
    "outputTypes": ["text"],
    "task": "visual-question-answering",
    "taskSummary": "Visual Question Answering is the task of answering open-ended questions based on an image. They output natural language responses to natural language questions."
  }
}
Examples
- /api/models/bytedance/sdxl-lightning-4step
- /api/models/meta/meta-llama-3-8b-instruct
- /api/models/black-forest-labs/flux-schnell
- /api/models/salesforce/blip
- /api/models/meta/meta-llama-3-70b-instruct
- /api/models/stability-ai/stable-diffusion
- /api/models/abiruyt/text-extract-ocr
- /api/models/tencentarc/gfpgan
- /api/models/andreasjansson/clip-features
- /api/models/stability-ai/sdxl
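As a sketch of how a client might call this endpoint, here is a small TypeScript example using the built-in fetch (Node 18+ or a browser). The `ClassificationResponse` interface simply mirrors the example response above; it is not a type provided by the API.

```ts
// Hypothetical types that mirror the example response shown above.
interface Classification {
  summary: string;
  inputTypes: string[];
  outputTypes: string[];
  task: string;
  taskSummary: string;
}

interface ClassificationResponse {
  model: string;
  classification: Classification;
}

const BASE_URL = "https://replicate-model-classifier.ziki.workers.dev";

async function classifyModel(owner: string, model: string): Promise<ClassificationResponse> {
  const res = await fetch(`${BASE_URL}/api/models/${owner}/${model}`);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return (await res.json()) as ClassificationResponse;
}

const result = await classifyModel("salesforce", "blip");
console.log(result.classification.task); // "visual-question-answering"
```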
To get a pretty-printed view of the prompt that was used to classify the model, add the prompt query parameter:
GET /api/models/:owner/:model?prompt=1
Examples
- /api/models/wavespeedai/wan-2.1-i2v-480p?prompt=1
- /api/models/meta/meta-llama-3-8b-instruct?prompt=1
- /api/models/black-forest-labs/flux-schnell?prompt=1
To see all the data that goes into the model classification, add the debug query parameter:
GET /api/models/:owner/:model?debug=1
Examples:
- /api/models/wavespeedai/wan-2.1-i2v-480p?debug=1
- /api/models/meta/meta-llama-3-8b-instruct?debug=1
- /api/models/black-forest-labs/flux-schnell?debug=1
Responses are cached forever by default. To bust the cache for a specific model, use the force query parameter:
GET /api/models/:owner/:model?force=1
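Here is a small sketch of building request URLs with the prompt, debug, and force flags described above. The modelUrl helper is illustrative, not something the API provides.

```ts
// Illustrative helper: appends the optional query flags documented above.
const BASE_URL = "https://replicate-model-classifier.ziki.workers.dev";

type Flag = "prompt" | "debug" | "force";

function modelUrl(owner: string, model: string, flags: Flag[] = []): string {
  const url = new URL(`/api/models/${owner}/${model}`, BASE_URL);
  for (const flag of flags) {
    url.searchParams.set(flag, "1");
  }
  return url.toString();
}

// Re-classify a model, skipping the cached result:
console.log(modelUrl("salesforce", "blip", ["force"]));
// => https://replicate-model-classifier.ziki.workers.dev/api/models/salesforce/blip?force=1
```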
GET /api/tasks
Returns data about all the Hugging Face tasks used for classification. See /api/tasks.
GET /api/taskNames
Returns a list of all the Hugging Face task names. See /api/taskNames.
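As a sketch (assuming /api/taskNames returns a JSON array of strings, which the examples above don't spell out), fetching the task names looks like this:

```ts
// Assumes /api/taskNames returns a JSON array of task name strings.
const res = await fetch("https://replicate-model-classifier.ziki.workers.dev/api/taskNames");
const taskNames = (await res.json()) as string[];
console.log(taskNames.length, taskNames[0]);
```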
GET /api/classifications
Returns a JSON object containing all cached model classifications. Each key is the model identifier (owner/modelName) and the value is the classification data.
You can filter the results by task type using the task query parameter:
GET /api/classifications?task=text-generation
Examples:
- All text generation models: /api/classifications?task=text-generation
- All image-to-image models: /api/classifications?task=image-to-image
- All text-to-image models: /api/classifications?task=text-to-image
- All visual question answering models: /api/classifications?task=visual-question-answering
- All image classification models: /api/classifications?task=image-classification
For a list of all available task types, see /api/taskNames.
Example response:
{
  "salesforce/blip": {
    "summary": "Generate image captions and answer questions about images",
    "inputTypes": ["image", "text"],
    "outputTypes": ["text"],
    "task": "visual-question-answering",
    "taskSummary": "Visual Question Answering is the task of answering open-ended questions based on an image. They output natural language responses to natural language questions."
  },
  "meta/meta-llama-3-8b-instruct": {
    "summary": "A large language model for text generation and instruction following",
    "inputTypes": ["text"],
    "outputTypes": ["text"],
    "task": "text-generation",
    "taskSummary": "Text generation is the task of generating text that is coherent and contextually relevant."
  }
}
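As a sketch, here's how a client might fetch all cached classifications for one task and print a summary per model. The Classification interface mirrors the values shown above; it is not a type shipped by the API.

```ts
// Fetch all cached classifications for a single task and list them.
interface Classification {
  summary: string;
  inputTypes: string[];
  outputTypes: string[];
  task: string;
  taskSummary: string;
}

const url = "https://replicate-model-classifier.ziki.workers.dev/api/classifications?task=text-to-image";
const classifications = (await (await fetch(url)).json()) as Record<string, Classification>;

// Keys are "owner/modelName", values are the classification objects.
for (const [model, { summary }] of Object.entries(classifications)) {
  console.log(`${model}: ${summary}`);
}
```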