πŸŽ₯ Turn one long video into 10 viral clips – 10x faster! πŸš€ Make your content shareable in seconds with Clipify, the easiest real-time video processing tool.

Clipify Logo

Clipify

An AI-powered video processing toolkit for creating social media-optimized content with automated transcription, captioning, and thematic segmentation.

Development Status PyPI version Python License Downloads GitHub stars Documentation Status Code style: black

🌟 Key Features

Content Processing

  • Video Processing Pipeline
    • Automated audio extraction and speech-to-text conversion
    • Smart thematic segmentation using AI
    • Mobile-optimized format conversion (9:16, 4:5, 1:1)
    • Intelligent caption generation and overlay

AI Capabilities

  • Advanced Analysis
    • Context-aware content segmentation
    • Dynamic title generation
    • Smart keyword and hashtag extraction
    • Sentiment analysis for content optimization
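Clipify's keyword and hashtag extraction is AI-driven, but the final normalization step can be illustrated with plain Python. The sketch below (not Clipify's actual implementation) shows one common convention: strip punctuation and CamelCase multi-word phrases into hashtags.

```python
import re

def keywords_to_hashtags(keywords):
    """Turn keyword phrases into hashtag strings, e.g.
    "machine learning" -> "#MachineLearning"."""
    hashtags = []
    for phrase in keywords:
        # Keep only alphanumeric runs, dropping punctuation and spaces
        words = re.findall(r"[A-Za-z0-9]+", phrase)
        if words:
            # Capitalize the first letter of each word, preserving
            # existing capitals (so "AI" stays "AI", not "Ai")
            hashtags.append("#" + "".join(w[:1].upper() + w[1:] for w in words))
    return hashtags

print(keywords_to_hashtags(["machine learning", "AI", "video editing"]))
# → ['#MachineLearning', '#AI', '#VideoEditing']
```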

Platform Options

  • Desktop Application

    • Intuitive graphical interface
    • Drag-and-drop functionality
    • Real-time processing feedback
    • Batch processing capabilities
  • Server Deployment

    • RESTful API integration
    • Asynchronous processing with webhooks
    • Multi-tenant architecture
    • Containerized deployment support

πŸš€ Quick Start

Desktop Application

πŸš€ Check out our full project built on Clipify at https://github.com/adelelawady/Clipify-hub πŸš€

Download and install the latest version:

  • Download Installer
  • Download Server

Python Package Installation

# Via pip
pip install clipify

# From source
git clone https://github.com/adelelawady/Clipify.git
cd Clipify
pip install -r requirements.txt

πŸ’» Usage Examples

Basic Implementation

from clipify.core.clipify import Clipify

# Initialize with basic configuration
clipify = Clipify(
    provider_name="hyperbolic",
    api_key="your-api-key",
    model="deepseek-ai/DeepSeek-V3",
    convert_to_mobile=True,
    add_captions=True
)

# Process video
result = clipify.process_video("input.mp4")

# Handle results
if result:
    print(f"Created {len(result['segments'])} segments")
    for segment in result['segments']:
        print(f"Segment {segment['segment_number']}: {segment['title']}")

Advanced Configuration

clipify = Clipify(
    # AI Configuration
    provider_name="hyperbolic",
    api_key="your-api-key",
    model="deepseek-ai/DeepSeek-V3",
    max_tokens=5048,
    temperature=0.7,
    
    # Video Processing
    convert_to_mobile=True,
    add_captions=True,
    mobile_ratio="9:16",
    
    # Caption Styling
    caption_options={
        "font": "Bangers-Regular.ttf",
        "font_size": 60,
        "font_color": "white",
        "stroke_width": 2,
        "stroke_color": "black",
        "highlight_current_word": True,
        "word_highlight_color": "red",
        "shadow_strength": 0.8,
        "shadow_blur": 0.08,
        "line_count": 1,
        "padding": 50,
        "position": "bottom"
    }
)

AudioExtractor

from clipify.audio.extractor import AudioExtractor

# Initialize audio extractor
extractor = AudioExtractor()

# Extract audio from video
audio_path = extractor.extract_audio(
    video_path="input_video.mp4",
    output_path="extracted_audio.wav"
)

if audio_path:
    print(f"Audio successfully extracted to: {audio_path}")

SpeechToText

from clipify.audio.speech import SpeechToText

# Initialize speech to text converter
converter = SpeechToText(model_size="base")  # Options: tiny, base, small, medium, large

# Convert audio to text with timing
result = converter.convert_to_text("audio_file.wav")

if result:
    print("Transcript:", result['text'])
    print("\nWord Timings:")
    for word in result['word_timings'][:5]:  # Show first 5 words
        print(f"Word: {word['text']}")
        print(f"Time: {word['start']:.2f}s - {word['end']:.2f}s")
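The `word_timings` structure shown above (dicts with `text`, `start`, and `end` keys) can also be converted into standard subtitle formats by hand. A minimal sketch, independent of Clipify, that emits one SRT cue per word:

```python
def format_timestamp(seconds):
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def word_timings_to_srt(word_timings):
    """Build an SRT document with one numbered cue per timed word."""
    cues = []
    for i, word in enumerate(word_timings, start=1):
        cues.append(
            f"{i}\n"
            f"{format_timestamp(word['start'])} --> {format_timestamp(word['end'])}\n"
            f"{word['text']}\n"
        )
    return "\n".join(cues)

srt = word_timings_to_srt([{"text": "Hello", "start": 0.0, "end": 1.25}])
print(srt)
```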

VideoConverter

from clipify.video.converter import VideoConverter

# Initialize video converter
converter = VideoConverter()

# Convert video to mobile format with blurred background
result = converter.convert_to_mobile(
    input_video="landscape_video.mp4",
    output_video="mobile_video.mp4",
    target_ratio="9:16"  # Options: "1:1", "4:5", "9:16"
)

if result:
    print("Video successfully converted to mobile format")

VideoConverterStretch

from clipify.video.converterStretch import VideoConverterStretch

# Initialize stretch converter
stretch_converter = VideoConverterStretch()

# Convert video using stretch method
result = stretch_converter.convert_to_mobile(
    input_video="landscape.mp4",
    output_video="stretched.mp4",
    target_ratio="4:5"  # Options: "1:1", "4:5", "9:16"
)

if result:
    print("Video successfully converted using stretch method")

VideoProcessor

from clipify.video.processor import VideoProcessor

# Initialize video processor with caption styling
processor = VideoProcessor(
    # Font settings
    font="Bangers-Regular.ttf",
    font_size=60,
    font_color="white",
    
    # Text effects
    stroke_width=2,
    stroke_color="black",
    shadow_strength=0.8,
    shadow_blur=0.08,
    
    # Caption behavior
    highlight_current_word=True,
    word_highlight_color="red",
    line_count=1,
    padding=50,
    position="bottom"  # Options: "bottom", "top", "center"
)

# Process video with captions
result = processor.process_video(
    input_video="input_video.mp4",
    output_video="captioned_output.mp4",
    use_local_whisper="auto"  # Options: "auto", True, False
)

if result:
    print("Video successfully processed with captions")

# Process multiple video segments
segment_files = ["segment1.mp4", "segment2.mp4", "segment3.mp4"]
processed_segments = processor.process_video_segments(
    segment_files=segment_files,
    output_dir="processed_segments"
)

The VideoProcessor provides powerful captioning capabilities:

  • Customizable font styling and text effects
  • Word-level highlighting for better readability
  • Shadow and stroke effects for visibility
  • Automatic speech recognition using Whisper
  • Support for batch processing multiple segments

VideoCutter

from clipify.video.cutter import VideoCutter

# Initialize video cutter
cutter = VideoCutter()

# Cut a specific segment
result = cutter.cut_video(
    input_video="full_video.mp4",
    output_video="segment.mp4",
    start_time=30.5,  # Start at 30.5 seconds
    end_time=45.2     # End at 45.2 seconds
)

if result:
    print("Video segment successfully cut")
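To cut a long video into several clips, you typically compute the time boundaries first and then pass each `(start, end)` pair to `cut_video`. A sketch of the boundary math in pure Python, independent of Clipify:

```python
def clip_boundaries(duration, clip_length, overlap=0.0):
    """Split a video of `duration` seconds into (start, end) pairs of
    at most `clip_length` seconds, with optional overlap between clips."""
    if clip_length <= overlap:
        raise ValueError("clip_length must exceed overlap")
    boundaries = []
    start = 0.0
    while start < duration:
        end = min(start + clip_length, duration)
        boundaries.append((start, end))
        if end >= duration:
            break
        # Step back by `overlap` so adjacent clips share context
        start = end - overlap
    return boundaries

print(clip_boundaries(100.0, 45.0))
# → [(0.0, 45.0), (45.0, 90.0), (90.0, 100.0)]
```

Each pair can then be fed to `cutter.cut_video(...)` in a loop to produce the individual clip files.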

SmartTextProcessor

from clipify.core.text_processor import SmartTextProcessor
from clipify.core.ai_providers import HyperbolicAI

# Initialize AI provider and text processor
ai_provider = HyperbolicAI(api_key="your_api_key")
processor = SmartTextProcessor(ai_provider)

# Process text content
text = "Your long text content here..."
segments = processor.segment_by_theme(text)

if segments:
    for segment in segments['segments']:
        print(f"\nTitle: {segment['title']}")
        print(f"Keywords: {', '.join(segment['keywords'])}")
        print(f"Content length: {len(segment['content'])} chars")

πŸ“¦ Project Structure

clipify/
β”œβ”€β”€ clipify/
β”‚   β”œβ”€β”€ __init__.py           # Package exports and version info
β”‚   β”œβ”€β”€ core/
β”‚   β”‚   β”œβ”€β”€ __init__.py       # Core module exports
β”‚   β”‚   β”œβ”€β”€ clipify.py        # Main Clipify class implementation
β”‚   β”‚   β”œβ”€β”€ processor.py      # Content processing and segmentation
β”‚   β”‚   β”œβ”€β”€ text_processor.py # Text analysis and theme detection
β”‚   β”‚   └── ai_providers.py   # AI providers (OpenAI, Anthropic, Hyperbolic)
β”‚   β”œβ”€β”€ video/
β”‚   β”‚   β”œβ”€β”€ __init__.py       # Video module exports
β”‚   β”‚   β”œβ”€β”€ processor.py      # Video captioning and effects
β”‚   β”‚   β”œβ”€β”€ converter.py      # Mobile format with blur background
β”‚   β”‚   β”œβ”€β”€ converter_stretch.py  # Stretch-based format conversion
β”‚   β”‚   └── cutter.py         # Video segment extraction
β”‚   └── audio/
β”‚       β”œβ”€β”€ __init__.py       # Audio module exports
β”‚       β”œβ”€β”€ extractor.py      # FFmpeg-based audio extraction
β”‚       └── speech.py         # Whisper speech recognition
β”œβ”€β”€ scripts/
β”‚   β”œβ”€β”€ build.sh              # Package build script
β”‚   └── publish.sh            # PyPI publishing script
β”œβ”€β”€ .gitignore                # Git ignore patterns
β”œβ”€β”€ LICENSE                   # MIT License
β”œβ”€β”€ MANIFEST.in               # Package manifest
β”œβ”€β”€ README.md                 # Project documentation
β”œβ”€β”€ requirements.txt          # Project dependencies
└── setup.py                  # Package configuration


πŸ› οΈ Configuration Options

AI Providers

  • hyperbolic: Default provider, using the DeepSeek-V3 model
  • openai: OpenAI GPT models
  • anthropic: Anthropic Claude models
  • ollama: Local model deployment

Video Formats

  • Aspect Ratios: 1:1, 4:5, 9:16
  • Output Formats: MP4, MOV
  • Quality Presets: Low, Medium, High
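Converting to one of these aspect ratios implies finding the largest region of the source frame that matches the target ratio (the blur-based converter then fills the remaining area with a blurred background). As a hedged illustration of the geometry only, not Clipify's code:

```python
def target_size(src_w, src_h, ratio):
    """Return the largest (width, height) matching `ratio` (e.g. "9:16")
    that fits inside a src_w x src_h frame, anchored to one dimension."""
    rw, rh = (int(x) for x in ratio.split(":"))
    # If the full source width fits the ratio within the source height,
    # anchor to the width; otherwise anchor to the height.
    if src_w * rh <= src_h * rw:
        return src_w, src_w * rh // rw
    return src_h * rw // rh, src_h

print(target_size(1920, 1080, "1:1"))   # → (1080, 1080)
print(target_size(1920, 1080, "9:16"))  # → (607, 1080)
```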

Caption Customization

  • Font customization
  • Color schemes
  • Position options
  • Animation effects
  • Word highlighting

🀝 Contributing

We welcome contributions! Here's how you can help:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Please read our Contributing Guidelines for details.

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

🌐 Support

πŸ™ Acknowledgments

  • FFmpeg for video processing
  • OpenAI for AI capabilities
  • PyTorch community
  • All contributors and supporters

Buy me a coffee
