Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
SGLang is a high-performance serving framework for large language models and multimodal models, now the de facto industry standard deployed on over 400,000 GPUs worldwide. It introduces RadixAttention for KV cache reuse, a zero-overhead CPU scheduler, and compressed finite state machines for faster structured output decoding. SGLang supports a wide range of models including Llama, Qwen, DeepSeek, Kimi, GLM, and diffusion models, and runs across NVIDIA, AMD, Intel, Google TPU, and Ascend NPU hardware.
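The compressed finite state machines mentioned above speed up structured output by restricting, at every decoding step, which tokens the model may emit. A toy sketch of that idea (this is an illustration of FSM-constrained decoding in general, not SGLang's actual API or its compression scheme; the grammar and `score_fn` are hypothetical):

```python
# Hypothetical tiny "grammar": a key, then ':', then a single digit.
# Each state maps allowed tokens to the next state, so any decoded
# sequence is guaranteed to be structurally valid.
FSM = {
    "start": {"key": "after_key"},
    "after_key": {":": "after_colon"},
    "after_colon": {"1": "done", "2": "done"},
}

def constrained_decode(score_fn, fsm, start="start", end="done"):
    """Greedily pick the highest-scoring token among FSM-allowed tokens."""
    state, output = start, []
    while state != end:
        allowed = fsm[state]                # tokens legal in this state
        token = max(allowed, key=score_fn)  # greedy, but only over the allowed set
        output.append(token)
        state = allowed[token]
    return output

# Stand-in for model logits: score each candidate token by its length.
constrained_decode(len, FSM)  # → ['key', ':', '1']
```

The mask never lets an illegal token through, so invalid structure is impossible by construction; SGLang's contribution is making this masking cheap per step.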
ollama
The simplest way to run LLMs locally, with 165K+ GitHub stars. One-command deployment, 100+ models, a REST API, and multi-platform support.
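The REST API is served locally on port 11434 by default; a minimal sketch of calling its `/api/generate` endpoint with only the standard library (this assumes a running Ollama server and an already-pulled model, here `llama3` as a placeholder):

```python
import json
import urllib.request

def build_payload(prompt, model="llama3"):
    # /api/generate takes the model name, the prompt, and stream=False
    # to receive one JSON object instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama3", host="http://localhost:11434"):
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # blocks until generation finishes
        return json.loads(resp.read())["response"]

# Example (needs a running server): print(generate("Why is the sky blue?"))
```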
state-spaces
Linear-time state space model (SSM) architectures, including Mamba, for efficient sequence modeling that rivals Transformers.
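The linear-time claim comes from the recurrent form of an SSM: one pass over the sequence, constant work per step, versus attention's quadratic cost. A minimal scalar sketch of that recurrence (an illustration of the general idea, not Mamba's selective-scan kernel; the coefficients are arbitrary):

```python
# Scalar state-space recurrence:
#   h_t = a * h_{t-1} + b * x_t,    y_t = c * h_t
# A single linear pass over the sequence gives O(L) time in sequence
# length L, where self-attention would cost O(L^2).
def ssm_scan(xs, a=0.5, b=1.0, c=1.0):
    h, ys = 0.0, []
    for x in xs:             # one pass, constant work per element
        h = a * h + b * x    # state carries compressed context forward
        ys.append(c * h)     # readout
    return ys

ssm_scan([1.0, 0.0, 0.0])  # → [1.0, 0.5, 0.25]: exponentially decaying memory
```

Mamba's key addition is making `a`, `b`, `c` input-dependent (selective) while keeping this linear-time scan structure.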
tensorzero
Open-source industrial-grade LLM stack with a Rust gateway (<1ms latency), observability, optimization via dynamic in-context learning (DICL), evaluation, and built-in A/B experimentation.
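Gateway-side A/B experimentation means each incoming request is routed to one of several model variants according to configured traffic weights. A toy sketch of that routing concept (an illustration only, not TensorZero's actual configuration format or routing logic; the variant names and weights are hypothetical):

```python
import random

# Hypothetical experiment: send 80% of traffic to one variant, 20% to another.
VARIANTS = {"variant_a": 0.8, "variant_b": 0.2}

def pick_variant(variants, rng=random):
    """Sample a variant name in proportion to its traffic weight."""
    names, weights = zip(*variants.items())
    return rng.choices(names, weights=weights, k=1)[0]

# Simulate 1000 requests; the observed split roughly matches the weights,
# letting downstream evaluation compare the variants' quality and cost.
counts = {name: 0 for name in VARIANTS}
for _ in range(1000):
    counts[pick_variant(VARIANTS)] += 1
```

In a real stack the variant assignment would also be logged per request, so observability and evaluation can attribute outcomes back to each arm of the experiment.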
openvinotoolkit
Intel's open-source AI inference optimization toolkit supporting PyTorch, TensorFlow, and ONNX across CPU, GPU, and NPU hardware with INT8/FP16 quantization.
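INT8 quantization shrinks model weights by mapping floats onto 8-bit integers through a shared scale factor. A minimal sketch of symmetric INT8 quantization (the general technique, not OpenVINO's actual API, which applies it per-layer with calibration data):

```python
# Symmetric INT8 quantization: map floats into [-127, 127] with one scale,
# then dequantize to approximate the originals. Each value's error is at
# most half a quantization step (scale / 2), plus float rounding.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid zero scale
    q = [round(v / scale) for v in values]            # 8-bit integer codes
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 1.27]
q, scale = quantize_int8(weights)  # q == [12, -50, 33, 127], scale ≈ 0.01
```

Storing `q` takes one byte per value instead of four, and integer arithmetic is typically faster on the CPU/NPU targets the toolkit supports.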