
fix: Use LLM to generate unique scene prompts for video extensions#318

Open
crowwdev wants to merge 1 commit into chenyme:main from crowwdev:fix-video-scene-repetition

Conversation


@crowwdev crowwdev commented Mar 12, 2026

Summary

Prevent scene repetition in video extensions by using an LLM to generate a unique prompt for each round.

Changes

  • Added _generate_scene_prompts_llm() in video.py, which calls the local Grok API
  • Uses curl_cffi.requests.AsyncSession (already available in the container) instead of adding an aiohttp dependency
  • Each 6-second extension round gets a unique LLM-generated continuation prompt
  • Falls back to the original prompt if the LLM call fails

Verification

  • A 30-second video generates 5 distinct scenes
  • No repeated content between segments
  • The container starts without errors (no missing dependencies)

- Integrate grok-4.1-fast to generate unique scene descriptions for each video round
- Prevent scene repetition in 30-second videos by using different prompts per 6-second segment
- Add _generate_scene_prompts_llm() in video.py for base video generation
- Add _generate_scene_prompt_for_extend() in video_extend.py for manual extensions
- Each scene now has natural progression without repetition
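
For the manual-extension path, the second helper presumably asks the LLM to continue from the last generated scene. A minimal sketch, with the same caveats as above (endpoint, payload shape, and wording are assumptions; only the function name and fallback come from the PR):

```python
EXTEND_API_URL = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint


def _build_extend_request(previous_prompt: str) -> dict:
    """Payload asking the LLM to continue from the last scene (assumed format)."""
    return {
        "model": "grok-4.1-fast",
        "messages": [
            {"role": "system",
             "content": "Continue the video's story in one new scene prompt. "
                        "Do not repeat the previous scene."},
            {"role": "user", "content": f"Previous scene: {previous_prompt}"},
        ],
    }


async def _generate_scene_prompt_for_extend(previous_prompt: str) -> str:
    """Return a fresh continuation prompt, or the previous prompt on failure."""
    try:
        # Import inside the try so a missing dependency also hits the fallback.
        from curl_cffi.requests import AsyncSession
        async with AsyncSession() as session:
            resp = await session.post(
                EXTEND_API_URL,
                json=_build_extend_request(previous_prompt),
                timeout=30,
            )
            return resp.json()["choices"][0]["message"]["content"].strip()
    except Exception:
        return previous_prompt  # fallback keeps manual extension working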

Fixes chenyme#316

@crowwdev crowwdev marked this pull request as draft March 17, 2026 11:37
@crowwdev crowwdev marked this pull request as ready for review March 17, 2026 11:37