Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
LiteRT-LM is Google's open-source inference framework for deploying large language models on edge devices with production-grade performance. Released under the Apache 2.0 license, it powers GenAI features in Chrome, Chromebook Plus, and Pixel Watch. The framework supports cross-platform deployment across Android, iOS (in development), Web, Desktop (Windows/Linux), and IoT devices such as the Raspberry Pi.

Key capabilities include GPU and NPU hardware acceleration for on-device performance, multimodal input (vision and audio), and function calling for agentic workflows on edge devices. LiteRT-LM supports popular model families including Gemma (with the latest Gemma 4 optimizations), Llama, Phi-4, and Qwen; the latest release, v0.10.1 (April 3, 2026), focuses on Gemma 4 performance optimizations.

The codebase is primarily C++ (76.6%) with Rust, Python, and Kotlin bindings, and is built with Bazel for cross-platform compilation. Stable APIs are available in Kotlin (Android), Python (prototyping), and C++ (high performance), with Swift for iOS in development. A CLI tool enables immediate model testing without writing any code.