Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
Kimi K2 is Moonshot AI's state-of-the-art open-source Mixture-of-Experts (MoE) language model, with 1 trillion total parameters and 32 billion activated per token. The architecture comprises 61 layers (including 1 dense layer), 384 experts with 8 selected per token, Multi-head Latent Attention (MLA) with a 128K context length, a 160K vocabulary, and SwiGLU activation.

The model was pre-trained on 15.5 trillion tokens using the MuonClip optimizer (a stabilized variant of Muon) at unprecedented scale with zero training instability, a significant achievement for trillion-parameter training. Kimi K2 is specifically optimized for agentic tasks, including tool calling, code generation, and autonomous reasoning.

It is released in two variants: Kimi-K2-Base (the foundation model) and Kimi-K2-Instruct (post-trained for chat, tool use, and safety alignment). Weights are available on Hugging Face in block-fp8 format, with recommended inference engines including vLLM, SGLang, KTransformers, and TensorRT-LLM. API access is available via Moonshot's platform through OpenAI- and Anthropic-compatible endpoints.
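The sparse-activation arithmetic above (8 of 384 experts selected per token, so roughly 32B of the 1T parameters are active) can be illustrated with a toy top-k router. This is a minimal sketch of generic MoE gating, not Kimi K2's actual routing code; the expert count and top-k are real, while the hidden size and router weights are random stand-ins.

```python
import numpy as np

# Toy top-k expert routing in the style of MoE layers like Kimi K2's.
# NUM_EXPERTS and TOP_K match the published architecture; everything else
# (hidden size, router weights, input) is an illustrative stand-in.
NUM_EXPERTS = 384
TOP_K = 8
HIDDEN = 64  # toy hidden size; the real model's is far larger

rng = np.random.default_rng(0)
router_w = rng.standard_normal((HIDDEN, NUM_EXPERTS))

def route(hidden_state: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return indices of the top-k experts and their normalized gate weights."""
    logits = hidden_state @ router_w           # one router score per expert
    top_idx = np.argsort(logits)[-TOP_K:]      # the 8 highest-scoring experts
    gates = np.exp(logits[top_idx] - logits[top_idx].max())
    gates /= gates.sum()                       # softmax over the selected experts
    return top_idx, gates

idx, gates = route(rng.standard_normal(HIDDEN))
print(len(idx))                   # 8 experts chosen per token
print(round(float(gates.sum()), 6))  # gate weights renormalized to 1
```

Only the selected experts' feed-forward blocks run for that token, which is why per-token compute tracks the 32B activated parameters rather than the full 1T.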
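Because the hosted endpoints are OpenAI-compatible, a tool-calling request can be sketched as a standard chat-completions payload. Nothing is sent here; the base URL, model id, and `get_weather` tool are illustrative assumptions, so check Moonshot's platform documentation for the current values before use.

```python
import json

# Sketch of a tool-calling request body for an OpenAI-compatible endpoint.
# BASE_URL and the model id are assumptions for illustration only.
BASE_URL = "https://api.moonshot.ai/v1/chat/completions"  # assumed endpoint
payload = {
    "model": "kimi-k2-instruct",  # hypothetical model id
    "messages": [
        {"role": "user", "content": "What's the weather in Beijing?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for this example
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}
body = json.dumps(payload)
# Send with any HTTP client, e.g.:
#   requests.post(BASE_URL, headers={"Authorization": f"Bearer {API_KEY}"}, data=body)
print(len(json.loads(body)["tools"]))
```

If the model decides to call the tool, the response carries a `tool_calls` entry whose arguments the client executes before sending the result back as a `tool` role message, which is the loop that agentic use builds on.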