Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
Pixelle-Video is an open-source, AI-powered automated short-video generation engine developed by AIDC-AI. It transforms a single topic input into a complete video by orchestrating scriptwriting, visual generation, voice synthesis, and music composition in a fully automated pipeline. The project has accumulated 2,500 GitHub stars and 420 forks, with active development including a motion transfer module added in January 2026.

Built on the ComfyUI framework, Pixelle-Video provides a modular workflow design that lets users customize each stage of the video creation process. It supports multiple LLM providers (GPT, Qwen, DeepSeek, Ollama) for script generation and integrates multiple TTS engines for narration, making it a flexible foundation for automated content production.

## End-to-End Automated Pipeline

Pixelle-Video's core workflow follows four stages: scriptwriting, visual planning, frame-by-frame processing, and final video composition. Given only a topic string, the system generates a compelling narrative, plans visual assets for each scene, processes individual frames using AI image and video generation models, and assembles the final output with synchronized audio. No manual intervention is required between stages.

## Multi-Model LLM Integration

The scriptwriting engine supports GPT, Qwen, DeepSeek, and Ollama as interchangeable backends, so users can select the model that best fits their budget, latency requirements, or content style. Local model support through Ollama means the entire pipeline can run without external API calls, which is significant for privacy-sensitive content production or offline environments.

## Voice Synthesis and Digital Avatar

Pixelle-Video integrates Edge-TTS, Index-TTS, and voice cloning capabilities for narration. The January 2026 update added digital avatar narration, enabling AI-generated presenters to deliver scripts with lip-synced video.
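Taken together, the stages described above (scriptwriting via a pluggable LLM, then audio synthesis and composition) can be sketched as a chain of interchangeable callables. This is a minimal illustration only; every name here (`Scene`, `write_script`, `render_video`) is hypothetical and not Pixelle-Video's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scene:
    narration: str       # sentence the TTS engine will speak
    visual_prompt: str   # prompt handed to the image/video model

def write_script(topic: str, llm: Callable[[str], str]) -> list[Scene]:
    """Stage 1: turn a topic string into per-scene narration and visual prompts."""
    raw = llm(f"Write a 3-scene short-video script about: {topic}")
    return [Scene(narration=line, visual_prompt=f"{topic}, scene {i + 1}")
            for i, line in enumerate(raw.splitlines()) if line.strip()]

def render_video(topic: str, llm: Callable[[str], str],
                 tts: Callable[[str], bytes]) -> dict:
    """Stages 2-4 collapsed into a stub: plan visuals, synthesize audio, compose."""
    scenes = write_script(topic, llm)
    audio = [tts(s.narration) for s in scenes]
    return {"scenes": scenes, "audio_clips": len(audio)}

# Toy backends standing in for GPT/Qwen/DeepSeek/Ollama and a TTS engine.
fake_llm = lambda prompt: "Opening hook\nMain point\nCall to action"
fake_tts = lambda text: text.encode("utf-8")

result = render_video("urban gardening", fake_llm, fake_tts)
print(result["audio_clips"])  # 3 scenes -> 3 audio clips
```

Because each backend is just a callable, swapping a hosted model for a local Ollama endpoint changes one argument rather than the pipeline itself, which is the point of the interchangeable-backend design.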
Digital avatar narration targets creators who want talking-head style content without recording actual footage.

## Motion Transfer Module

The latest addition (January 26, 2026) is a motion transfer module that applies motion patterns from reference videos to AI-generated content. This enables more dynamic and natural-looking video output by transferring real-world movement characteristics to synthesized scenes.

## Template System and Format Flexibility

Multiple visual templates cover different content categories, and the system supports both portrait and landscape video orientations. This makes Pixelle-Video suitable for platforms ranging from TikTok and Instagram Reels (portrait) to YouTube and web embeds (landscape). A Windows all-in-one package requires no Python installation, while Docker Compose and source-code installation options are available for Linux and macOS.
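In practice, orientation support amounts to selecting an output resolution before composition. A minimal sketch, where the resolution table and helper name are assumptions rather than Pixelle-Video's actual template configuration:

```python
# Hypothetical resolution table; Pixelle-Video's real template config may differ.
ORIENTATIONS = {
    "portrait":  (1080, 1920),   # 9:16 - TikTok, Instagram Reels
    "landscape": (1920, 1080),   # 16:9 - YouTube, web embeds
}

def output_size(orientation: str) -> tuple[int, int]:
    """Map an orientation keyword to a (width, height) pair in pixels."""
    try:
        return ORIENTATIONS[orientation]
    except KeyError:
        raise ValueError(f"unknown orientation: {orientation!r}") from None

print(output_size("portrait"))   # (1080, 1920)
```

Keeping orientation as a single template parameter means the same script, narration, and visual assets can be re-rendered for a different platform without rebuilding the pipeline.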