Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
FastGS is the official open-source implementation of "FastGS: Training 3D Gaussian Splatting in 100 Seconds," a CVPR 2026 Highlight paper. The project is a general acceleration framework that compresses Gaussian Splatting training from minutes or hours down to roughly 100 seconds while preserving rendering quality, and it is released under the MIT license by the fastgs team.

## Why FastGS Matters

3D Gaussian Splatting has become the dominant technique for real-time neural rendering, but training a single scene can take 30 minutes to several hours on a high-end GPU, which limits experimentation and production iteration. FastGS attacks this bottleneck directly: on Mip-NeRF 360 it trains 3.32 times faster than DashGaussian, and on Deep Blending it achieves a 15.45 times speedup over vanilla 3DGS. Critically, the framework reports state-of-the-art quality within the 100-second training budget, so the speedup does not come at a meaningful quality cost.

## Efficient Gaussian Control

The core algorithmic contribution is a strict policy for managing Gaussian growth during training. Standard 3DGS pipelines tend to over-densify, producing many redundant Gaussians that increase memory pressure and slow rasterization. FastGS uses loss-map thresholds together with both absolute and standard gradient thresholds to decide when to split or retain points, producing a leaner Gaussian set that converges faster without sacrificing fidelity.

## Memory-Efficient Training

Because FastGS keeps the active Gaussian count under tight control, GPU memory consumption stays low enough for the framework to run on a wider range of hardware than methods that need large VRAM headroom. This matters for academic labs and indie developers who do not have access to 80GB-class GPUs.

## Multi-Backbone Compatibility

Rather than a single monolithic implementation, FastGS is designed as an acceleration framework that integrates with multiple backbones, including vanilla 3DGS, Scaffold-GS, and Mip-Splatting. Branches in the repository extend the same acceleration ideas to Fast-D3DGS for dynamic scenes, Fast-DropGaussian for sparse-view reconstruction, and Fast-PGSR for surface reconstruction. This breadth means researchers can apply FastGS to whatever 3DGS variant their work is built on.

## Spherical Harmonics Learning Rate Tuning

The authors separately tune learning rates for high-order and low-order spherical harmonics coefficients, recognizing that view-dependent color components converge at different rates than the diffuse base color. This relatively simple change contributes measurably to the overall speedup by avoiding wasted gradient updates on already-converged components.

## Tile Compaction

FastGS exposes control over compact box multipliers that determine how splats are distributed across screen-space tiles during rasterization. Better tile compaction reduces wasted GPU work, particularly in scenes where Gaussians cluster heavily in some regions and are sparse in others.

## Dataset Coverage

The repository includes training scripts and configurations for Mip-NeRF 360, Tanks & Temples, and Deep Blending, plus the dynamic-scene, sparse-view, and surface-reconstruction branches mentioned above. This makes it straightforward to reproduce the benchmarks and to apply FastGS to new captures using similar configurations.

## Limitations

FastGS inherits dependencies and licensing requirements from 3DGS, Taming-3DGS, and Speedy-Splat, which means users must respect those upstream licenses in addition to FastGS's own MIT terms. The 100-second figure assumes a high-end consumer or workstation GPU; older hardware will be slower, though it will still benefit from the acceleration. Like other 3DGS methods, FastGS requires high-quality input camera poses, typically from COLMAP, and is sensitive to capture quality. Finally, the framework is research code: production deployments will need additional engineering for robustness and integration with downstream rendering pipelines.
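The Gaussian-control idea above can be sketched as a mask that gates densification on several conditions at once. This is a minimal illustration, not FastGS's actual code: the function name, threshold values, and the exact combination rule are assumptions, but it shows the kind of stricter AND-policy the section describes, compared with vanilla 3DGS, which thresholds an averaged screen-space gradient alone.

```python
import torch

def densify_mask(grad_norm, abs_grad_norm, loss_map_score,
                 grad_thresh=2e-4, abs_grad_thresh=8e-4, loss_thresh=0.1):
    """Decide which Gaussians to densify (split/clone).

    Illustrative sketch only -- names and thresholds are placeholders,
    not FastGS's API. A Gaussian is densified only when its standard
    screen-space gradient AND its absolute gradient exceed their
    thresholds AND it projects onto pixels whose rendering loss is
    still high, yielding far fewer new Gaussians than vanilla 3DGS.
    """
    return ((grad_norm > grad_thresh)
            & (abs_grad_norm > abs_grad_thresh)
            & (loss_map_score > loss_thresh))
```

Gating on the loss map means well-reconstructed regions stop spawning Gaussians early, which is where most of the memory and rasterization savings come from.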
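The per-band spherical-harmonics learning rates can be expressed as separate optimizer parameter groups, the same mechanism the 3DGS family of codebases uses. The tensor shapes, group names, and the 20x ratio below are illustrative assumptions, not FastGS's tuned values:

```python
import torch

# Illustrative shapes: N Gaussians, degree-3 SH = 1 DC coefficient
# plus 15 higher-order coefficients per RGB channel.
N = 1000
features_dc = torch.zeros(N, 1, 3, requires_grad=True)     # low-order (diffuse) SH
features_rest = torch.zeros(N, 15, 3, requires_grad=True)  # high-order (view-dependent) SH

# Separate learning rate per SH band; the values and the /20 ratio are
# placeholders, not FastGS's actual hyperparameters.
optimizer = torch.optim.Adam([
    {"params": [features_dc],   "lr": 2.5e-3,        "name": "f_dc"},
    {"params": [features_rest], "lr": 2.5e-3 / 20.0, "name": "f_rest"},
])
```

Because the diffuse base color converges quickly while view-dependent bands need finer steps, decoupling the rates avoids wasted gradient updates on already-converged components.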
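To see why a compact-box multiplier saves GPU work, consider how a splat's screen-space bounding box maps to 16-pixel tiles. The function below is a hypothetical sketch, not FastGS's rasterizer code: shrinking the box multiplier below 1 trims the splat's faint tails, so fewer tiles have to process it.

```python
TILE = 16  # tile size in pixels, as in the 3DGS tile-based rasterizer

def tile_rect(mean_x, mean_y, radius, box_mult=1.0, tiles_x=120, tiles_y=68):
    """Range of screen tiles a splat touches (illustrative only).

    box_mult stands in for a compact-box multiplier: values < 1 shrink
    the bounding box so the splat is assigned to fewer tiles.
    """
    r = radius * box_mult
    x0 = max(0, int((mean_x - r) // TILE))
    y0 = max(0, int((mean_y - r) // TILE))
    x1 = min(tiles_x, int((mean_x + r) // TILE) + 1)
    y1 = min(tiles_y, int((mean_y + r) // TILE) + 1)
    return x0, y0, x1, y1
```

For a splat centered at (100, 100) with radius 32, a multiplier of 0.5 covers a 3x3 tile block instead of 5x5, roughly a 64% reduction in per-splat tile work for that Gaussian.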
- **graphdeco-inria**: Original reference implementation of 3D Gaussian Splatting for real-time radiance field rendering.
- **ahujasid**: Connect Blender to Claude AI via MCP for natural-language-driven 3D scene creation and manipulation.
- **Tencent Hunyuan**: Tencent's open-source 3D asset generation system with 13k+ GitHub stars, creating high-resolution textured 3D models from a single image using a two-stage diffusion pipeline.