Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
PersonaLive is a real-time, streamable diffusion framework for generating infinite-length portrait animations from static images. Accepted to CVPR 2026, it represents a significant advance in applying diffusion models to live streaming scenarios, enabling expressive facial animation driven by video input with latency low enough for real-time use. Developed by GVCLab, the project has earned 2,100 GitHub stars and 297 forks since its November 2025 release, with active development continuing through February 2026.

## Real-Time Diffusion for Live Streaming

PersonaLive tackles one of the hardest problems in portrait animation: generating high-quality, temporally consistent video frames fast enough for live streaming. Traditional diffusion models produce impressive results but are far too slow for real-time applications. PersonaLive introduces a streamable inference pipeline that generates frames continuously, without the per-clip processing overhead typical of diffusion-based video models.

## TensorRT Acceleration

The framework supports TensorRT optimization, delivering roughly a 2x speedup over standard PyTorch inference and bringing processing time into the range needed for interactive streaming. The streaming strategy also enables memory-efficient inference on a GPU with 12GB of VRAM, a workload that would otherwise require significantly more hardware.

## Driving Video Input

PersonaLive takes a static portrait image and a driving video as input. The system maps facial expressions, head movements, and subtle gestures from the driving video onto the static portrait, producing natural, expressive animation. The quality of this expression transfer is a key reason for the CVPR 2026 acceptance.

## Multiple Deployment Options

The project provides several interfaces: a Gradio-based Web UI for local experimentation, WebRTC streaming for browser-based real-time use, and ComfyUI integration for incorporation into existing creative pipelines.
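The driving-video idea described above amounts to re-applying the driver's per-frame motion onto the static portrait. A minimal sketch of that principle, using keypoint offsets relative to a neutral reference frame; the function name `transfer_motion` and the keypoint representation are hypothetical illustrations, not PersonaLive's actual API:

```python
def transfer_motion(portrait_kps, driving_kps_seq):
    """Hypothetical sketch of expression transfer: apply each driving
    frame's keypoint offsets (relative to the first driving frame)
    onto the static portrait's keypoints."""
    ref = driving_kps_seq[0]  # neutral reference frame of the driver
    animated = []
    for frame_kps in driving_kps_seq:
        # Offset of each driving keypoint from its neutral position...
        deltas = [(x - rx, y - ry) for (x, y), (rx, ry) in zip(frame_kps, ref)]
        # ...re-applied to the portrait's own keypoints.
        animated.append([(px + dx, py + dy)
                         for (px, py), (dx, dy) in zip(portrait_kps, deltas)])
    return animated

# Toy example: a single keypoint (say, a mouth corner) moving in the driver.
portrait = [(100.0, 200.0)]
driving = [[(50.0, 80.0)], [(52.0, 78.0)]]  # driver moves +2 in x, -2 in y
frames = transfer_motion(portrait, driving)
# frames[1][0] == (102.0, 198.0): the portrait keypoint shifts by the same offset
```

The real system learns this mapping with a diffusion model rather than copying raw offsets, which is what lets it handle identity-preserving detail and subtle gestures.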
This flexibility makes it accessible to researchers, content creators, and developers building live-streaming products.

## Infinite-Length Generation

Unlike batch-processing approaches that generate fixed-length video clips, PersonaLive supports infinite-length generation. The streaming architecture maintains temporal consistency across arbitrarily long sessions, which is essential for live streaming, where uptime is measured in hours rather than seconds.

## CVPR 2026 Acceptance

The paper was accepted to CVPR 2026, one of the premier computer vision conferences. This academic validation confirms the technical contribution of the streamable diffusion approach and the quality of the generated animations.
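The infinite-length property comes from keeping only a bounded window of recent context instead of the whole clip. A minimal sketch of that pattern, assuming a generator-based pipeline; `stream_animate` and the toy `denoise` stand-in are hypothetical, not PersonaLive's real interface:

```python
from collections import deque
from itertools import islice

def stream_animate(driving_frames, context_len=4, denoise=None):
    """Yield one animated frame per driving frame, retaining only a
    short window of recent outputs so memory stays constant no matter
    how long the session runs (hypothetical sketch)."""
    if denoise is None:
        # Stand-in for the diffusion denoiser: nudges the driving signal
        # toward the mean of recent frames (purely illustrative).
        denoise = lambda drive, ctx: drive + 0.1 * (sum(ctx) / len(ctx) if ctx else 0.0)
    context = deque(maxlen=context_len)  # bounded temporal context
    for drive in driving_frames:         # the source may be infinite
        frame = denoise(drive, list(context))
        context.append(frame)
        yield frame

# Works against an endless driving source without growing memory:
def endless_drive():
    t = 0
    while True:
        yield float(t % 10)
        t += 1

first_five = list(islice(stream_animate(endless_drive()), 5))
```

Because each output depends on the last few frames rather than the whole history, temporal consistency is preserved locally while the session can run for hours.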