Commit 7f42b9b

Merge pull request #17193 from BerriAI/litellm_twelvelabs_int
Added support for twelvelabs pegasus
2 parents: b85df0b + 9d05839

File tree: 7 files changed (+522 / -0 lines changed)

docs/my-website/docs/providers/bedrock.md

Lines changed: 127 additions & 0 deletions
@@ -1683,6 +1683,131 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
</TabItem>
</Tabs>

## TwelveLabs Pegasus - Video Understanding

TwelveLabs Pegasus 1.2 is a video understanding model that can analyze and describe video content. LiteLLM supports this model through Bedrock's `/invoke` endpoint.

| Property | Details |
|----------|---------|
| Provider Route | `bedrock/us.twelvelabs.pegasus-1-2-v1:0`, `bedrock/eu.twelvelabs.pegasus-1-2-v1:0` |
| Provider Documentation | [TwelveLabs Pegasus Docs ↗](https://docs.twelvelabs.io/docs/models/pegasus) |
| Supported Parameters | `max_tokens`, `temperature`, `response_format` |
| Media Input | S3 URI or base64-encoded video |

### Supported Features

- **Video Analysis**: Analyze video content from S3 or base64 input
- **Structured Output**: Support for JSON schema response format
- **S3 Integration**: Support for S3 video URLs with bucket owner specification

### Usage with S3 Video

<Tabs>
<TabItem value="sdk" label="SDK">

```python title="TwelveLabs Pegasus SDK Usage" showLineNumbers
from litellm import completion
import os

# Set AWS credentials
os.environ["AWS_ACCESS_KEY_ID"] = "your-aws-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-aws-secret-key"
os.environ["AWS_REGION_NAME"] = "us-east-1"

response = completion(
    model="bedrock/us.twelvelabs.pegasus-1-2-v1:0",
    messages=[{"role": "user", "content": "Describe what happens in this video."}],
    mediaSource={
        "s3Location": {
            "uri": "s3://your-bucket/video.mp4",
            "bucketOwner": "123456789012",  # 12-digit AWS account ID
        }
    },
    temperature=0.2
)

print(response.choices[0].message.content)
```

</TabItem>
<TabItem value="proxy" label="Proxy">

**1. Add to config**

```yaml title="config.yaml" showLineNumbers
model_list:
  - model_name: pegasus-video
    litellm_params:
      model: bedrock/us.twelvelabs.pegasus-1-2-v1:0
      aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID
      aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY
      aws_region_name: os.environ/AWS_REGION_NAME
```

**2. Start proxy**

```bash title="Start LiteLLM Proxy" showLineNumbers
litellm --config /path/to/config.yaml

# RUNNING at http://0.0.0.0:4000
```

**3. Test it!**

```bash title="Test Pegasus via Proxy" showLineNumbers
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
    "model": "pegasus-video",
    "messages": [
        {
            "role": "user",
            "content": "Describe what happens in this video."
        }
    ],
    "mediaSource": {
        "s3Location": {
            "uri": "s3://your-bucket/video.mp4",
            "bucketOwner": "123456789012"
        }
    },
    "temperature": 0.2
}'
```

</TabItem>
</Tabs>
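If you prefer calling the proxy from Python rather than curl, the OpenAI SDK can forward the provider-specific `mediaSource` field through `extra_body`. This is a minimal sketch, assuming the proxy config above is running at `http://0.0.0.0:4000` with the virtual key `sk-1234`; `extra_body` simply merges these keys into the request payload, mirroring the curl example.

```python title="Proxy via OpenAI SDK (sketch)" showLineNumbers
from openai import OpenAI

# Point the OpenAI client at the LiteLLM proxy started above (assumed URL/key).
client = OpenAI(base_url="http://0.0.0.0:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="pegasus-video",
    messages=[{"role": "user", "content": "Describe what happens in this video."}],
    # extra_body merges these keys into the top-level JSON body,
    # matching the "mediaSource" field in the curl request above.
    extra_body={
        "mediaSource": {
            "s3Location": {
                "uri": "s3://your-bucket/video.mp4",
                "bucketOwner": "123456789012",
            }
        }
    },
    temperature=0.2,
)

print(response.choices[0].message.content)
```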
### Usage with Base64 Video

You can also pass video content directly as base64:

```python title="Base64 Video Input" showLineNumbers
from litellm import completion
import base64

# Read video file and encode to base64
with open("video.mp4", "rb") as video_file:
    video_base64 = base64.b64encode(video_file.read()).decode("utf-8")

response = completion(
    model="bedrock/us.twelvelabs.pegasus-1-2-v1:0",
    messages=[{"role": "user", "content": "What is happening in this video?"}],
    mediaSource={
        "base64String": video_base64
    },
    temperature=0.2,
)

print(response.choices[0].message.content)
```
### Important Notes

- **Response Format**: The model supports structured output via `response_format` with a JSON schema; see the sketch below.
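A minimal structured-output sketch follows. It assumes the OpenAI-style `json_schema` payload that LiteLLM accepts for `response_format`; the schema itself (`video_summary` and its fields) is invented here purely for illustration.

```python title="Structured Output via response_format (sketch)" showLineNumbers
from litellm import completion

# Hypothetical schema for illustration only; replace with your own fields.
video_summary_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "key_events": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["summary", "key_events"],
}

response = completion(
    model="bedrock/us.twelvelabs.pegasus-1-2-v1:0",
    messages=[{"role": "user", "content": "Summarize this video as JSON."}],
    mediaSource={
        "s3Location": {
            "uri": "s3://your-bucket/video.mp4",
            "bucketOwner": "123456789012",
        }
    },
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "video_summary", "schema": video_summary_schema},
    },
)

print(response.choices[0].message.content)
```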
## Provisioned throughput models
To use provisioned throughput Bedrock models pass
- `model=bedrock/<base-model>`, example `model=bedrock/anthropic.claude-v2`. Set `model` to any of the [Supported AWS models](#supported-aws-bedrock-models)
@@ -1743,6 +1868,8 @@ Here's an example of using a bedrock model with LiteLLM. For a complete list, re
| Meta Llama 2 Chat 70b | `completion(model='bedrock/meta.llama2-70b-chat-v1', messages=messages)` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |
| Mistral 7B Instruct | `completion(model='bedrock/mistral.mistral-7b-instruct-v0:2', messages=messages)` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |
| Mixtral 8x7B Instruct | `completion(model='bedrock/mistral.mixtral-8x7b-instruct-v0:1', messages=messages)` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |
| TwelveLabs Pegasus 1.2 (US) | `completion(model='bedrock/us.twelvelabs.pegasus-1-2-v1:0', messages=messages, mediaSource={...})` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |
| TwelveLabs Pegasus 1.2 (EU) | `completion(model='bedrock/eu.twelvelabs.pegasus-1-2-v1:0', messages=messages, mediaSource={...})` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |

## Bedrock Embedding

litellm/__init__.py

Lines changed: 3 additions & 0 deletions
@@ -1234,6 +1234,9 @@ def add_known_models():
from .llms.bedrock.chat.invoke_transformations.amazon_titan_transformation import (
    AmazonTitanConfig,
)
from .llms.bedrock.chat.invoke_transformations.amazon_twelvelabs_pegasus_transformation import (
    AmazonTwelveLabsPegasusConfig,
)
from .llms.bedrock.chat.invoke_transformations.base_invoke_transformation import (
    AmazonInvokeConfig,
)
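Assuming this import sits at module level like the neighboring Bedrock invoke configs, the new class is re-exported from the top-level `litellm` package; a quick sanity-check sketch:

```python
import litellm

# Prints the config class if the re-export added in this diff resolved
# (assumes the import above is at module level, as for AmazonTitanConfig).
print(litellm.AmazonTwelveLabsPegasusConfig)
```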

litellm/constants.py

Lines changed: 1 addition & 0 deletions
@@ -856,6 +856,7 @@
    "nova",
    "deepseek_r1",
    "qwen3",
    "twelvelabs",
]

BEDROCK_EMBEDDING_PROVIDERS_LITERAL = Literal[
