A Model Context Protocol server for Luma AI's Dream Machine API.
This MCP server integrates with Luma AI's Dream Machine API (v1) to provide tools for generating, managing, and manipulating AI-generated videos and images via Large Language Models. It implements the Model Context Protocol (MCP) to enable seamless interaction between AI assistants and Luma's creative tools.
- Text-to-video generation
- Advanced video generation with keyframes
- Image-to-video conversion
- Video extension and interpolation
- Image generation with reference images
- Audio addition to videos
- Video upscaling
- Credit management
- Generation tracking and status checking

- `ping` - Check if the Luma API is running
  - No parameters required
- `create_generation` - Creates a new video generation
  - Input:
    - `prompt` (string, required): Text description of the video to generate
    - `model` (string, optional): Model to use (default: "ray-2")
      - Available models: "ray-1-6", "ray-2", "ray-flash-2"
    - `resolution` (string, optional): Video resolution (choices: "540p", "720p", "1080p", "4k")
    - `duration` (string, optional): Video duration (only "5s" and "9s" are currently supported)
    - `aspect_ratio` (string, optional): Video aspect ratio (e.g., "16:9", "1:1", "9:16", "4:3", "3:4", "21:9", "9:21")
    - `loop` (boolean, optional): Whether to make the video loop
    - `keyframes` (object, optional): Start and end frames for advanced video generation:
      - `frame0` and/or `frame1` with either:
        - `{"type": "image", "url": "image_url"}` for image keyframes
        - `{"type": "generation", "id": "generation_id"}` for video keyframes
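
  For example, a basic text-to-video request might pass arguments like these (the prompt and option values are illustrative placeholders, not required settings):

  ```json
  {
    "prompt": "A golden retriever running through a field of sunflowers at sunset",
    "model": "ray-2",
    "resolution": "720p",
    "duration": "5s",
    "aspect_ratio": "16:9",
    "loop": false
  }
  ```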
- `get_generation` - Gets the status of a generation
  - Input:
    - `generation_id` (string, required): ID of the generation to check
  - Output includes:
    - Generation ID
    - State (queued, dreaming, completed, failed)
    - Failure reason (if failed)
    - Video URL (if completed)
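
  For example (the ID below is a placeholder for a real generation ID returned by `create_generation`):

  ```json
  { "generation_id": "123e4567-e89b-12d3-a456-426614174000" }
  ```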
- `list_generations` - Lists all generations
  - Input:
    - `limit` (number, optional): Maximum number of generations to return (default: 10)
    - `offset` (number, optional): Number of generations to skip
- `delete_generation` - Deletes a generation
  - Input:
    - `generation_id` (string, required): ID of the generation to delete
- `upscale_generation` - Upscales a video generation to a higher resolution
  - Input:
    - `generation_id` (string, required): ID of the generation to upscale
    - `resolution` (string, required): Target resolution for the upscaled video (one of "540p", "720p", "1080p", or "4k")
  - Notes:
    - The generation must be in a completed state to be upscaled
    - The target resolution must be higher than the original generation's resolution
    - Each generation can only be upscaled once
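
  For example, upscaling a completed generation to 1080p might look like this (the ID is a placeholder):

  ```json
  {
    "generation_id": "123e4567-e89b-12d3-a456-426614174000",
    "resolution": "1080p"
  }
  ```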
- `add_audio` - Adds AI-generated audio to a video generation
  - Input:
    - `generation_id` (required): The ID of the generation to add audio to
    - `prompt` (required): The prompt for the audio generation
    - `negative_prompt` (optional): The negative prompt for the audio generation
    - `callback_url` (optional): URL to notify when the audio processing is complete
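
  For example (the ID and prompts are illustrative placeholders):

  ```json
  {
    "generation_id": "123e4567-e89b-12d3-a456-426614174000",
    "prompt": "Gentle rain with distant thunder",
    "negative_prompt": "music, voices"
  }
  ```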
- `generate_image` - Generates an image from a text prompt with optional reference images
  - Input:
    - `prompt` (string, required): Text description of the image to generate
    - `model` (string, optional): Model to use for image generation (default: "photon-1")
      - Available models: "photon-1", "photon-flash-1"
    - `aspect_ratio` (string, optional): Image aspect ratio (same options as video)
    - `image_ref` (array, optional): Reference images to guide generation
      - Each ref: `{"url": "image_url", "weight": optional_float}`
    - `style_ref` (array, optional): Style reference images
      - Each ref: `{"url": "image_url", "weight": optional_float}`
    - `character_ref` (object, optional): Character reference images
      - Format: `{"identity_name": {"images": ["url1", "url2", ...]}}`
    - `modify_image_ref` (object, optional): Image to modify
      - Format: `{"url": "image_url", "weight": optional_float}`
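
  For example, an image generation with a style reference might pass arguments like these (the prompt, URL, and weight are illustrative placeholders):

  ```json
  {
    "prompt": "A watercolor painting of a lighthouse on a rocky coast",
    "model": "photon-1",
    "aspect_ratio": "16:9",
    "style_ref": [
      { "url": "https://example.com/style-reference.jpg", "weight": 0.8 }
    ]
  }
  ```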
- `get_credits` - Gets credit information for the current user
  - No parameters required
  - Returns available credit balance in USD cents
- `get_camera_motions` - Gets all supported camera motions
  - No parameters required
  - Returns: List of available camera motion strings

- Get your Luma API key from Luma AI (sign up or log in to get your API key)
- Add this to your Claude Desktop configuration file:
  - On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
  - On Windows: `%APPDATA%\Claude\claude_desktop_config.json`

  ```json
  {
    "mcpServers": {
      "luma": {
        "command": "uv",
        "args": [
          "run",
          "--project",
          "/path/to/your/luma-ai-mcp-server",
          "-m",
          "luma_ai_mcp_server"
        ],
        "env": {
          "LUMA_API_KEY": "your-luma-api-key-here"
        }
      }
    }
  }
  ```

  Replace:
  - `/path/to/your/luma-ai-mcp-server` with the actual path to your server directory
  - `your-luma-api-key-here` with your actual Luma API key
- Restart Claude Desktop

That's it! You can now use Luma AI tools directly in Claude Desktop conversations.

If you're having issues:
- Check your API key is correct
- Make sure the path to the server is correct
- View logs with: `tail -n 20 -f ~/Library/Logs/Claude/mcp*.log`

The Luma API supports various types of advanced video generation through keyframes:
- Starting from an image: Provide `frame0` with `type: "image"` and an image URL
- Ending with an image: Provide `frame1` with `type: "image"` and an image URL
- Extending a video: Provide `frame0` with `type: "generation"` and a generation ID
- Reverse extending a video: Provide `frame1` with `type: "generation"` and a generation ID
- Interpolating between videos: Provide both `frame0` and `frame1` with `type: "generation"` and generation IDs
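
For instance, interpolating between two existing videos could be requested by passing both keyframes to `create_generation` (the prompt and generation IDs below are placeholders):

```json
{
  "prompt": "Smooth transition between the two scenes",
  "keyframes": {
    "frame0": { "type": "generation", "id": "first-generation-id" },
    "frame1": { "type": "generation", "id": "second-generation-id" }
  }
}
```
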
- Duration: Currently, the API only supports durations of "5s" or "9s"
- Resolution: Valid values are "540p", "720p", "1080p", and "4k"
- Models:
  - Video generation:
    - "ray-2" (default) - Best quality, slower
    - "ray-flash-2" - Faster generation
    - "ray-1-6" - Legacy model
  - Image generation:
    - "photon-1" (default) - Best quality, slower
    - "photon-flash-1" - Faster generation
- Generation types: Video, image, and advanced (with keyframes)
- Aspect Ratios: "1:1" (square), "16:9" (landscape), "9:16" (portrait), "4:3" (standard), "3:4" (standard portrait), "21:9" (ultrawide), "9:21" (ultrawide portrait)
- States: "queued", "dreaming", "completed", "failed"
- Upscaling:
  - Video generations can only be upscaled when they're in a "completed" state
  - Target resolution must be higher than the original generation's resolution
  - Each generation can only be upscaled once
- API Key: Required in environment variables
- API Version: Uses Dream Machine API v1
MIT